Convert Natural Language Text to SQL Query with LLM


LLMs, or large language models, like ChatGPT can answer almost anything, but they are not free: you need to pay if you want to use their API in your project 🧐. In this article I will explore how we can translate or convert any natural language text into a SQL query using an open-source (free) large language model from Huggingface and Langchain in Python.

In this article I will cover the topics below:

  • Create and Setup Google Colab notebook
  • Install Required Libraries
  • Import all Libraries
  • Download LLM model files from Huggingface
  • Configure Huggingface LLM Pipeline
  • Configure Langchain Template LLM Chain
  • Generate and Test SQL Queries

Create and Setup Google Colab notebook

To generate SQL queries from spoken-language text we need a good text generation model. You have seen in my Langchain tutorial article that I was able to load text2text LLMs on a CPU. But those small models are not capable enough to produce SQL code from any natural English question.

We need a good LLM (with at least 2-5 billion parameters) which can convert our spoken text to a SQL query. But this kind of text generation model needs a GPU to load. I tried these models on a CPU but ended up with a llama error which I was not able to solve. So I decided to implement it in a free Google Colab notebook, and it worked there. I thought of sharing my experience with you.

Note: You can load LLMs with the free Huggingface API. But the model which I am going to try in this tutorial does not have API access (maybe because of the size of the model).

First open a new Google Colab notebook and go to Runtime -> Change runtime type. Select T4 GPU (or any other available GPU) from the Hardware accelerator section and keep the Runtime type as Python 3. Then save it.

[Screenshot: Google Colab notebook configuration to run the text-to-SQL converter Python code on a GPU]

To confirm whether you are getting a CUDA GPU backend or not, run the command below in a Colab notebook cell and check the output. As you can see in the output below, I am getting an NVIDIA Tesla T4 GPU with CUDA Version 12.2.

!nvidia-smi
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       Off | 00000000:00:04.0 Off |                    0 |
| N/A   40C    P8               9W /  70W |      0MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
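You can also verify the GPU from Python itself. Below is a minimal check using torch, which comes pre-installed in Colab; the device name you see will depend on the GPU you were allocated.

# Confirm that PyTorch can see the CUDA device
import torch

print(torch.cuda.is_available())      # True if a GPU backend is active
print(torch.cuda.get_device_name(0))  # e.g. Tesla T4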

When I was testing this code in free Google Colab, I was getting 12.7 GB system RAM, 15.0 GB GPU RAM and 78.2 GB disk space, which is more than enough for this project (SQL query generation from natural text).

Install Required Libraries

Okay, now we are all set to start our coding. But like any other Python project, we should first install all the required libraries. I have listed the commands to install those libraries below.

# Install Required python libraries to convert natural text to SQL queries
!pip install transformers
!pip install langchain
!pip install accelerate
!pip install bitsandbytes
!pip install sentencepiece
Requirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (4.35.2)
Requirement already satisfied: langchain in /usr/local/lib/python3.10/dist-packages (0.1.1)
Requirement already satisfied: accelerate in /usr/local/lib/python3.10/dist-packages (0.26.1)
Requirement already satisfied: bitsandbytes in /usr/local/lib/python3.10/dist-packages (0.42.0)
Requirement already satisfied: sentencepiece in /usr/local/lib/python3.10/dist-packages (0.1.99)
... (dependency resolution output truncated)

After installing those libraries, please restart your Colab notebook session or kernel (a code-based restart trick is shown after this list), otherwise you may get errors like the ones below:

  • ImportError: Using low_cpu_mem_usage=True or a device_map requires Accelerate: pip install accelerate
  • ImportError: Using load_in_8bit=True requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://test.pypi.org/simple/ bitsandbytes or pip install bitsandbytes
  • LlamaTokenizer requires the SentencePiece library but it was not found in your environment
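If you prefer restarting the runtime from code rather than from the Runtime menu, a commonly used Colab trick (not an official Colab API, just a workaround) is to kill the current Python process, which makes Colab spin up a fresh runtime:

import os

# Killing the current process forces Colab to restart the runtime,
# so the freshly installed libraries are picked up on the next run
os.kill(os.getpid(), 9)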

Import all Libraries

Let’s now load all the required Python packages to make this project work. To translate natural text into a SQL query, we mainly need three Python libraries: torch, transformers, and langchain. We are going to use multiple functions from those libraries.

# Import required libraries to generate SQL queries from natural text or language
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM, pipeline
from langchain.llms import HuggingFacePipeline
from langchain import PromptTemplate, LLMChain

Download LLM model files from Huggingface

So we are done with the notebook setup and library installation. All the hard work is done; let’s now start our code. First we need to choose a good LLM model which can produce correct SQL queries from a question asked in normal spoken English or any other language.

For this tutorial, I will use the alpaca-native model. It is a Huggingface replica of Stanford’s Alpaca model, and this is just the base version. If you want a larger model of the same family, you can try the Alpaca-13b or GPT4-X-Alpaca models and let me know the comparison in the comment section below. Both have around 13 billion parameters.

In this tutorial, I used the alpaca-native model and it gave me a good result. Below is the Python code to download this model’s files and tokenizer from Huggingface.

# Download model from Huggingface
# device_map='auto' spreads the weights across available devices, and
# load_in_8bit=True quantizes them (via bitsandbytes) so the ~7B-parameter
# model fits in the T4's 15 GB of GPU memory
alpaca_model = LlamaForCausalLM.from_pretrained(
    "chavinlo/alpaca-native",
    device_map='auto',
    load_in_8bit=True,
)

# Download tokenizer from Huggingface
alpaca_tokenizer = LlamaTokenizer.from_pretrained("chavinlo/alpaca-native")
[Screenshot: downloading the large language model files to convert natural language to SQL queries in Python]
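If you want to confirm how much memory the quantized model actually occupies, transformers exposes a get_memory_footprint() method on loaded models. A quick check (the exact number will vary with your quantization settings):

# Report the model's in-memory size in GB
print(f"Model memory footprint: {alpaca_model.get_memory_footprint() / 1024**3:.2f} GB")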

Configure Huggingface LLM Pipeline

The next step is to set up the Huggingface pipeline. If you download and want to use any model from Huggingface, a pipeline setup is a must. Pipeline configuration is nothing but setting some parameters of the model: the task type (in our case “text-generation“), the model name, etc. Please refer to the Python code below to set up the HF pipeline for this project.

# Define and setup pipeline parameters
pipe_param = pipeline(
    "text-generation",
    model=alpaca_model,
    tokenizer=alpaca_tokenizer,
    max_length=450,            # maximum total tokens (prompt + generation)
    top_p=0.95,                # nucleus sampling threshold
    temperature=0.4,           # lower value = more deterministic output
    repetition_penalty=1.2     # discourage repeating the same tokens
)
# Note: temperature and top_p only take effect when do_sample=True is passed;
# without it, transformers emits the UserWarnings shown in the outputs below

# Configure Huggingface model pipeline
llm_pipe = HuggingFacePipeline(pipeline=pipe_param)
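Before wiring this into Langchain, it is worth a quick smoke test that the pipeline responds. A minimal sketch (the prompt here is just an illustrative example, not the template we will use later):

# Sanity check: the wrapped pipeline should generate some text
print(llm_pipe.invoke("Write a SQL query that selects all rows from a table named employee."))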

Configure Langchain Template LLM Chain

Langchain is a powerful and useful Python framework for working with large language models. If you are not confident with Langchain, I suggest you first read my Langchain tutorial post.


In this simple Langchain setup, we can divide the entire code into three parts. The first part is to define the template, i.e. write the prompt. In the second step, we connect the prompt with our LLM. Finally, in the third and last step, we wrap the prediction in a function that returns the desired output.

If you read my Langchain tutorial series, you will understand the use of each step. Here I will just explain the prompt I use to generate a SQL query from a natural human text question.

In this prompt, I am instructing the LLM model: I will give you a SQL table name and the list of columns of that table (the table metadata). Based on that information I will ask a question; you just need to convert this question into SQL query code.

# Define Langchain template with prompt
template = """
Write a SQL Query given the table name {Table} and columns as a list {Columns} for the given question :
{question}.
"""

# Configure Langchain prompt template with the Huggingface model
prompt = PromptTemplate(template=template, input_variables=["Table","question","Columns"])
llm_chain = LLMChain(prompt=prompt, llm=llm_pipe)

# Write a function to generate the desired output (SQL query)
def get_llm_output(tble, question, cols):
    llm_output = llm_chain.run({"Table": tble, "question": question, "Columns": cols})
    return llm_output

Generate and Test SQL Queries

The complex part is done. Now comes the interesting part. Let’s now test whether this prompt, model, Langchain and Huggingface pipeline setup is working properly or not. Is it really generating a SQL query from any natural language question text? The answer is below.

# Sample Output 1:
tble = "employee"
cols = ["id","name","date_of_birth","grade","manager_id"]
question = "Query the count of employees in grade L6 with 249053 as the manager ID"

output_command = get_llm_output(tble,question,cols)
print(output_command)
/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1473: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:381: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.4` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.95` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
  warnings.warn(
Answer: SELECT COUNT(*) FROM employee WHERE grade='L6' AND manager_id=249053;

As you can see, it is correctly translating our natural text question into a SQL query. Let’s try another one.

# Sample Output 2:
tble = "employee"
cols = ["id","name","date_of_birth","grade","manager_id"]
question = "I want to see first 10 rows of employee table"

output_command = get_llm_output(tble,question,cols)
print(output_command)
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:381: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.4` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.95` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
  warnings.warn(
The following query can be used: 
SELECT * FROM employee ORDER BY id LIMIT 10;
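Notice that the model wraps the query in extra text like “Answer:” or “The following query can be used:”. If you only want the bare SQL, a small post-processing helper can pull it out. Below is a minimal sketch; it assumes the generated query is a single SELECT statement ending with a semicolon:

# Extract just the SQL statement from the raw LLM output
import re

def extract_sql(llm_output):
    match = re.search(r"SELECT\b.*?;", llm_output, re.IGNORECASE | re.DOTALL)
    return match.group(0) if match else llm_output

print(extract_sql(output_command))
# SELECT * FROM employee ORDER BY id LIMIT 10;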

Working with multiple tables

The above examples are for a single table, but that is not a very practical situation. In practice there can be two or more tables for which you may need some complex queries. Let’s try to achieve that too.


To do this we just need to change the prompt and the input to the function. Below is the Python code for the same.

# Langchain template with prompt for complex SQL queries
template = """
Write a SQL Query given one or multiple table names in comma separated {Table} and columns as a list of list for all tables {Columns} for the given question :
{question}.
"""

prompt = PromptTemplate(template=template, input_variables=["Table","question","Columns"])

# Rebuild the chain with the new prompt (llm_pipe is unchanged)
llm_chain = LLMChain(prompt=prompt, llm=llm_pipe)

def get_llm_output(tble, question, cols):
    llm_output = llm_chain.run({"Table": tble, "question": question, "Columns": cols})
    return llm_output

Okay, let’s now try whether it is working properly or not. We just need to pass the column names of each table in a list-of-lists format.

# Let's try out
tble = "employee, student"
cols = [["emp_id","emp_name","date_of_birth","band", "manager_id"],
        ["std_id","std_name","date_of_birth", "score"]]
question = "Write query to join employee and student table based on emp_id of emplyee table and std_id from student table"

output_command = get_llm_output(tble,question,cols)
print(output_command)
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:381: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.4` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.95` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
  warnings.warn(
SELECT e.*, s.* FROM employee e INNER JOIN student s ON e.emp_id = s.std_id;

As you can see, it is correctly generating a SQL query to join the two tables (from our casual natural language question) based on the given IDs of each table.

What Next?

So we have successfully developed a Langchain and Huggingface pipeline to convert a natural language question into SQL query code. But what to do with it?

You can make an application (a desktop app, mobile app or web application) and connect it to a database. To start you can use a SQLite database; it is easy and simple to use in Python. If you are new to databases you can read this article: SQLite with Python Tutorial for Beginners.

That way, instead of just showing the query code, the application will actually fetch results or perform operations on a database. But before that you need a function or system to validate the generated query. This is important, because generative AI and LLMs are good but not yet mature enough to trust blindly.
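Below is a minimal sketch of such a validation step using Python’s built-in sqlite3 module. It runs the generated query against a throwaway in-memory copy of the schema, so a malformed query fails safely before it ever touches real data (the table definition mirrors the example employee table above):

# Validate a generated SQL query against an in-memory schema
import sqlite3

def validate_query(sql):
    conn = sqlite3.connect(":memory:")
    try:
        # Recreate the example schema used in this tutorial
        conn.execute(
            "CREATE TABLE employee (id INTEGER, name TEXT, "
            "date_of_birth TEXT, grade TEXT, manager_id INTEGER)"
        )
        conn.execute(sql)  # raises sqlite3.Error on invalid SQL
        return True
    except sqlite3.Error as err:
        print(f"Rejected query: {err}")
        return False
    finally:
        conn.close()

print(validate_query("SELECT COUNT(*) FROM employee WHERE grade='L6' AND manager_id=249053;"))  # True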

One last point: while searching for a suitable LLM model to convert normal text to SQL code, I found another good model which is specifically trained to generate code from natural language. The model name is bigcode/starcoder, and it supports and generates code for 80+ programming languages.

The only problem I found is the licensing of this model; I think it is not a completely open-source large language model. You need to use the Huggingface API to fetch this model. We can do that, but I was looking for an LLM AI model which is completely free, so that we can use it in our project to generate SQL queries from natural language.

If you are not sure how you can use any large language model through the Huggingface API, I suggest you read this Langchain tutorial post. I have written a list of articles about it.
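For reference, here is a rough sketch of what calling a hosted model through the Huggingface API looks like in Langchain. It assumes you have a Huggingface API token (the token string below is a placeholder) and that the model’s hosted Inference API is available to you, which, as noted above, may not be the case for starcoder:

# Use the hosted Huggingface Inference API instead of downloading the weights locally
from langchain.llms import HuggingFaceHub

# Replace the placeholder token below with your own Huggingface API token
starcoder_llm = HuggingFaceHub(
    repo_id="bigcode/starcoder",
    huggingfacehub_api_token="hf_your_token_here",
    model_kwargs={"temperature": 0.4, "max_new_tokens": 200},
)

print(starcoder_llm.invoke("Write a SQL query to count the rows in a table named employee."))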

This is it for this tutorial. If you have any questions or suggestions regarding this article, please let me know in the comment section below.
