
Let’s Cook with AI: How to Code a Recipe Recommendation Bot Using Telegram

A recommendation engine built with Python, the Telegram API, and deep learning models. Take a picture of your fridge and receive the best recipes.

Photo by Icons8 Team on Unsplash

Why you should read this post:

If you are interested in computational technologies and want to learn more about recognition engines, recommendation engines, and the Telegram API, this post is made for you.

We will build together, step-by-step, a mini-application that can provide tailored recipes based on the current ingredients you have in your kitchen. To go further, we’re going to add a recognition engine that can recognize fruits and vegetables based on images you send to the bot. Why, you may ask? Simply because it’s fun and easy!

We will also use the Telegram API as it offers an excellent opportunity to create a working frontend for small projects. In addition to that, if you are using Python for your backend activities, you can use multiple wrappers to leverage many different options with their API. I also find that this is a convenient tool in case you want to develop simple IoT solutions.

I hope this post inspires you for other cool projects. If you have any questions or would like to share your projects, do not hesitate to contact me directly or leave a comment.

How are we going to proceed?

  1. The application overview.
  2. Deployment on your local machine.
  3. A bit of machine learning with code.
  4. Conclusion.

A quick tour of the app:

Sometimes one picture (or, in this case, a video) is worth a thousand words. That is why this GIF is here to show you the bot's main feature and how it works.

Based on what ingredients you have in your kitchen, the bot can generate a set of recipes chosen among various cooking styles (Italian, Greek, Japanese, etc.).

Gif by author

Try it by yourself! Deploy the bot on your local machine to fully enjoy the experience. Have fun!

Tip: I used Gfycat to create the GIF. It offers fantastic features and possibilities; don't hesitate to try it!

Try it on your local machine (step-by-step):

First, you need to create a bot in a couple of clicks with the Telegram app. Then, you will be able to deploy the app on your local machine.

Generate a telegram bot:

You can have a quick look at this post to create a Telegram bot in a few steps. After creating your bot, you will receive its token. Keep it to yourself: anyone who finds it can do whatever they want with your bot (better safe than sorry).

Deploy it locally:

If you are familiar with GitHub, you will have all the pieces of information about the bot’s deployment on the following GitHub page.

If you have Docker installed on your local machine, just run the command line below on your terminal:

docker run -e token=Your_token tylerdurden1291/telegram-bot-recipes:mainimage

Machine learning:

We will now dive deeper into the project, which I have split into four parts:

  • How can we make recommendations using the Doc2Vec model and a classifier that predicts the cuisine?
  • Image Recognition based on Deep Learning.
  • Setting up the Telegram Bot for the frontend.
  • Quick deployment thanks to Docker.

How to make recommendations?

One of the technologies that has grabbed more and more attention in the past six years is Natural Language Processing (NLP). In this field, researchers aim to uncover the deep structure of languages and their secrets through distinct processes. Let’s summarize this in three main points.

  • Tokenization: this process transforms the input text into a format that machines can understand; in short, the text is split into tokens that are then mapped to numbers and vectors.
  • Feature engineering/modeling: once the text is understandable by a machine, we can extract knowledge from it to perform a given task. We call it feature engineering because we extract features from the input; it is also known as modeling, since those feature extractors are usually statistical models.
  • Task: in NLP, a model can perform many tasks, from easy ones, such as classification, to more complex ones, such as question answering. In our little project, we will perform a classification task for the cuisine prediction and a similarity match to find the best recipes.

Type of recommendations:

When we discuss recommendations, the subject is quite vague: there exist many different types of recommendation, depending on their purpose. For example, Netflix or Amazon make recommendations based on the content you are watching and on the similarity between customers. In our case, we want to make recommendations based on the similarity between the ingredients you already have and the ingredient lists of the recipes.

A good recommendation is a set of recipes whose ingredients overlap as much as possible with the ones you already have. A rule-based solution would be possible, but programming it would create considerable overhead. Hence I chose the well-known model called Doc2Vec. It is a generalization of Word2Vec at the document level, so we can compute the similarity between whole ingredient lists instead of single words.

To enhance the user experience, you can add a model before training your Doc2Vec on millions of recipes. Uber Eats and Deliveroo let you select the type of cuisine before ordering; with the right dataset and a simple model, we can quickly implement the same feature. After in-depth research, I found a recipe dataset labeled with food types (e.g., Italian, Mexican, French, Chinese). Therefore, in a first step, the cuisine type is predicted for the full recipe dataset using a logistic regression classifier. In a second step, a Doc2Vec model is trained for each type of cuisine. With this pipeline, we can be sure that our model will only propose recipes of the selected food type!
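The two-step pipeline above can be sketched as follows. Both steps are stubbed for illustration: `predict_cuisine` stands in for the classifier, and `PER_CUISINE_INDEX` stands in for the per-cuisine Doc2Vec indexes; neither matches the actual repository code.

```python
# Hypothetical two-step recommendation pipeline: classify the cuisine first,
# then query only that cuisine's recipe index.

def predict_cuisine(ingredients):
    """Stand-in for the TF-IDF + logistic regression classifier."""
    return "italian" if "pasta" in ingredients else "other"

# Stand-in for one Doc2Vec similarity index per cuisine
PER_CUISINE_INDEX = {
    "italian": ["carbonara", "margherita"],
    "other": ["stir fry"],
}

def recommend(ingredients, top_n=2):
    cuisine = predict_cuisine(ingredients)
    # Real code would call doc2vec_models[cuisine] for a similarity match
    return PER_CUISINE_INDEX[cuisine][:top_n]

print(recommend(["pasta", "egg", "bacon"]))
```

The point of the routing step is that a query never leaks across cuisines: once the classifier picks "italian", only Italian recipes can be returned.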

If you want to learn more about the model, I strongly encourage you to see the following links:

  • doc2vec
  • tf-idf
  • logistic regression

Dataset:

For this part of the project, we will need two different sources of data.

  • A first dataset to train the Doc2Vec model, with the set of ingredients per recipe. The dataset that comes to mind is Recipe1M, but its website was down at the time of writing. I found a quite similar one, called Recipes box.
  • A second dataset to train a model that predicts the type of cuisine from a set of ingredients. Thanks to Kaggle contributors, the Recipe Ingredients Dataset does the job nicely.

A bit of Coding:

For the tokenization process, I used a simple pipeline that lemmatizes the text to reduce words to their roots. Useless words, called stopwords, were also removed to decrease noise.

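As a minimal sketch of such a pipeline (not the exact code from the repository), a tokenizer can lowercase the text, keep only alphabetic words, and drop a stopword list; a full version would also lemmatize each token with a library such as NLTK or spaCy.

```python
import re

# Tiny stopword list for illustration; the real pipeline uses a full NLP stopword set
STOPWORDS = {"a", "an", "the", "of", "and", "or", "in", "on", "with", "for", "to"}

def tokenize(text):
    """Lowercase, keep alphabetic words only, and drop stopwords."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if w not in STOPWORDS]

print(tokenize("2 cups of chopped Tomatoes, with olive oil"))
# ['cups', 'chopped', 'tomatoes', 'olive', 'oil']
```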

I used feature engineering only for the cuisine-type prediction: a TF-IDF model embeds the input text before a logistic regression model predicts the kind of cuisine.

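A minimal scikit-learn sketch of that step, with toy data standing in for the Kaggle Recipe Ingredients Dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; the real classifier is fit on the full Kaggle dataset
recipes = [
    "basil mozzarella tomato pasta",
    "soy sauce rice nori wasabi",
    "feta olive cucumber oregano",
    "parmesan pasta garlic basil",
]
cuisines = ["italian", "japanese", "greek", "italian"]

# TF-IDF embeds the ingredient text, logistic regression predicts the cuisine
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(recipes, cuisines)

print(clf.predict(["tomato basil pasta"]))
```

Because the pipeline exposes the usual `fit`/`predict` interface, swapping the toy lists for the real dataset is a one-line change.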

To compute the similarity between the ingredient lists of recipes, we will use the famous Doc2Vec model. This model only needs a tokenization step because it extracts features itself! Note that the Gensim library allows us to compute similarity matches for unseen text with a simple command.


Let’s go further: add image recognition based on deep learning.

Image recognition is now a well-known task in machine learning, and deep learning models have proven to be the best and most reliable for it. But deep learning models require enormous computational power to train. Hence, transfer learning saves a lot of time and overhead by fine-tuning a pre-trained model instead of training a new one from scratch.

Dataset:

There exist many datasets for image recognition. I selected the following one, tailored for food recognition: GroceryStoreDataset contains many photos of fruits and vegetables taken from multiple angles in different surroundings, which makes it the best dataset available for now.

A bit of coding:

As I am using a deep learning model, I need GPU resources, and thanks to Google, we can access one easily through Colab notebooks. Hence, I share the notebook I used to train the ResNet model from a pre-trained model available in Keras.

Image_classification_groceries.ipynb — Colaboratory notebook (drive.google.com)

When the model is fully trained, here is the code to make predictions on a single image within the app.

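The app loads the trained ResNet weights from disk; since those aren't reproduced here, the sketch below substitutes a tiny untrained Keras model with the same 224×224 RGB interface, and `CLASS_NAMES` is a hypothetical label list standing in for the GroceryStoreDataset classes.

```python
import numpy as np
from tensorflow import keras

# Hypothetical labels; the real list comes from the GroceryStoreDataset
CLASS_NAMES = ["apple", "banana", "tomato"]

# Stand-in for the trained ResNet: untrained, but same input/output shapes
inputs = keras.Input(shape=(224, 224, 3))
x = keras.layers.GlobalAveragePooling2D()(inputs)
outputs = keras.layers.Dense(len(CLASS_NAMES), activation="softmax")(x)
model = keras.Model(inputs, outputs)

def predict_ingredient(image):
    """Scale pixel values, add a batch dimension, and return the top label."""
    batch = np.expand_dims(image.astype("float32") / 255.0, axis=0)
    probs = model.predict(batch, verbose=0)[0]
    return CLASS_NAMES[int(np.argmax(probs))]

print(predict_ingredient(np.zeros((224, 224, 3), dtype=np.uint8)))
```

The two preprocessing steps (scaling to [0, 1] and adding the batch dimension) are the parts that most often trip people up when moving from training notebooks to single-image inference.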

How to set up the Telegram Bot for the frontend:

Many APIs exist, and knowing how to use them is a useful skill. Telegram has been developing its API for many years and has done a fantastic job! One of the coolest things about this API is the number of wrappers that will simplify your life. In this project, I used the python-telegram-bot package, which has proven to be one of the easiest to use among the available wrappers.

Coding:

To get started with this wrapper, first install it using pip:

pip install python-telegram-bot

Before looking at my code, I highly recommend focusing on understanding the ConversationHandler schema. In their examples folder, you will find code and schemas that you can easily run locally. Comparing the Python files with their conversation schemas saved me hours compared to just reading lines of code.

As you can see, the first layer handles the main buttons and the text or photos sent to the bot. A second layer exists for two distinct interactions:

  • When an image is sent to the bot, it is processed and then passed to an additional callback function that retrieves the item the user tapped.
  • The same happens when you hit the Get recipes button. As you can see in the GIF at the beginning of the article, you are asked to select the cuisine you want to cook after hitting this button. A second callback function retrieves the cuisine the user selected.

Note how every final layer returns the main menu!


How to quickly deploy the App with Docker:

Docker is open-source software you might already have heard of. It creates containers with their own OS layer to avoid software dependency problems when deploying solutions on other machines. I encourage you to learn Docker, because you will work with it (or similar software) if you work in computational engineering. For this project, I created an image that you can run on your local machine if you have Docker installed, instead of cloning from GitHub. If you are new to Docker, I encourage you to watch the following video!

Commands:

I will not give you a Docker course here, and I encourage you to follow one to understand what a container is. The only thing you need to know is this: running a container means running a new OS on your device, and to construct this new OS, Docker uses images. This post presents how I created the image for the project and how to run it.

A Dockerfile is a file that you build to generate the image. It is composed of a source image, usually a Linux-based one, and a series of commands. Those commands download packages, set environment variables, and define the command that runs when the container launches.

The Dockerfile:

# Base image: the python3.7 source image on amd64
FROM python:3.7

# Create an environment variable to pass when the container is started
ENV token token_init

# Copy the requirements into the image before importing the whole code, then pip install everything
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

# Import the whole code into the image and change the working directory accordingly
COPY . /app
WORKDIR /app

# Run the setup to import models and datasets
RUN bash setup.sh

# Create models and populate the database
RUN python3.7 train_and_populate_recommendation_db.py

# Define the command to run when the container is started
CMD ["sh", "-c", "python3.7 app.py ${token}"]

Of course, you won’t need to build it locally: the image is available on Docker Hub, and you can easily pull and run the project with only one command. First, install Docker on your local amd64 machine (most personal desktops are amd64, but a Raspberry Pi, for example, is not) and run the command below with your bot token:

docker run -e token=Your_token_gen_by_BotFather tylerdurden1291/telegram-bot-recipes:mainimage

Conclusion:

This post was motivated by the idea of sharing. Open-sourcing projects is the best way to avoid bottlenecks in the creation process of great ideas. Hence, I decided to share my code and, thanks to Medium, to present the project in my own words with this post.

In a nutshell, the Telegram API can help you develop useful apps quickly, Docker will help you share and deploy your solutions, and Python has impressive features for rapid development!

If you liked this article, don’t forget to clap, and if you think it was interesting, you can check my GitHub page.
