Connect your LLM to the Internet with Microsoft AutoGen.

Gathnex
6 min read · Oct 12, 2023

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

Previously, we relied on autonomous agent frameworks such as AutoGPT, BabyAGI, and LangChain agents, each with its own advantages and disadvantages. Microsoft's AutoGen incorporates many of the strengths of these LLM agent frameworks, though it still has some drawbacks of its own.

  • AutoGen enables building next-gen LLM applications based on multi-agent conversations with minimal effort. It simplifies the orchestration, automation, and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses.
  • It supports diverse conversation patterns for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy, the number of agents, and agent conversation topology.
  • It provides a collection of working systems with different complexities. These systems span a wide range of applications from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
  • AutoGen provides a drop-in replacement of openai.Completion or openai.ChatCompletion as an enhanced inference API. It allows easy performance tuning, utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.

Let’s see how to connect an LLM to the web using AutoGen.

!pip install -q pyautogen~=0.1.0 docker openai 

We use gpt-3.5-turbo in this example.

import autogen

# Follow the same format for the model and api arguments.
config_list = [
    {
        "model": "gpt-3.5-turbo",
        "api_key": "OpenAI API key",
    }
]

llm_config = {
    "request_timeout": 600,
    "seed": 42,
    "config_list": config_list,
    "temperature": 0,
}
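Hardcoding an API key in source code is easy to leak. A common alternative is to read it from an environment variable instead; here is a minimal sketch of the same configuration built that way (the variable name OPENAI_API_KEY is an assumption, use whatever your environment defines):

```python
import os

# Read the key from the environment instead of hardcoding it.
# Falls back to a placeholder if the variable is not set.
api_key = os.environ.get("OPENAI_API_KEY", "OpenAI API key")

config_list = [
    {
        "model": "gpt-3.5-turbo",
        "api_key": api_key,
    }
]

llm_config = {
    "request_timeout": 600,
    "seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

print(config_list[0]["model"])
```

This keeps the key out of version control while producing the same `config_list` structure the rest of the article uses.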

In this demo we are going to use two agents: AssistantAgent and UserProxyAgent.

UserProxyAgent is conceptually a proxy agent for humans. By default it solicits human input as the agent’s reply at each interaction turn, and it also has the capability to execute code and call functions. The UserProxyAgent triggers code execution automatically when it detects an executable code block in the received message and no human input is provided.

AssistantAgent is designed to act as an AI assistant, using LLMs by default but not requiring human input or code execution. When it receives a message (typically a description of a task that needs to be solved), it can write Python code (in a Python coding block) for the user to execute. Under the hood, the Python code is written by the LLM (e.g., GPT-4). It can also receive the execution results and suggest corrections or bug fixes. Its behaviour can be altered by passing a new system message.

In simple terms, the two agents hold an iterative conversation until they reach a satisfactory output.

  1. The assistant receives a message from the user_proxy, which contains the task description.
  2. The assistant then tries to write Python code to solve the task and sends the response to the user_proxy.
  3. Once the user_proxy receives a response from the assistant, it tries to reply by either soliciting human input or preparing an automatically generated reply. If no human input is provided, the user_proxy executes the code and uses the result as the auto-reply.
  4. The assistant then generates a further response for the user_proxy. The user_proxy can then decide whether to terminate the conversation. If not, steps 3 and 4 are repeated.
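The steps above can be sketched as a toy loop. This is plain Python with no AutoGen: the two functions are stand-ins for the assistant's LLM call and the user_proxy's code execution, not the real implementations.

```python
def assistant_reply(task, attempt):
    # Stand-in for the assistant (steps 1-2): "writes code" for the task.
    # Pretend the first attempt is buggy and the second one is the fix.
    if attempt == 0:
        return "print(1/0)"           # buggy code
    return "print('task done')"       # corrected code

def execute(code):
    # Stand-in for the user_proxy's auto-reply (step 3): run the code
    # and report the result back to the assistant.
    try:
        exec(code)
        return "exitcode: 0"
    except Exception as e:
        return f"exitcode: 1 ({e})"

task = "print a completion message"
result = None
for attempt in range(5):
    code = assistant_reply(task, attempt)   # steps 1-2
    result = execute(code)                  # step 3
    if result.startswith("exitcode: 0"):    # step 4: decide to stop
        break

print(result)
```

In the real framework, `assistant_reply` is an LLM call and `execute` runs inside the UserProxyAgent's code-execution environment, but the feedback loop has the same shape.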

About arguments:

  1. Name: The agent has a name, so you know which agent is speaking.
  2. Termination message: It can recognize when a conversation should stop based on specific messages it receives.
  3. Auto replies: The agent can reply automatically a certain number of times in a row without needing your input.
  4. Human input modes:
  • ALWAYS: it asks for your input at every turn.
  • TERMINATE: it asks for your input when a termination message is received or after replying the maximum number of times.
  • NEVER: it never asks for your input and always replies automatically.
  5. Function map: it maps names to the functions (tasks) it can execute, so it can match them to the right calls.
  6. Code execution: it can run code in a controlled way; you can configure where the code runs and how long it may take.
  7. Default reply: if no other reply is generated, it falls back to a default message.
  8. LLM-based auto reply: it can use an LLM to generate replies to the messages it receives.
  9. System message: a prompt that guides how the LLM formulates its responses.
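To see the termination-message argument in isolation, here is the same predicate used in the agent setup below, applied to a few sample messages:

```python
# A message ends the conversation when its content ends with "TERMINATE".
is_termination_msg = lambda x: x.get("content", "").rstrip().endswith("TERMINATE")

print(is_termination_msg({"content": "All done. TERMINATE"}))  # terminates
print(is_termination_msg({"content": "TERMINATE  \n"}))        # trailing whitespace is stripped first
print(is_termination_msg({"content": "CONTINUE"}))             # keeps going
print(is_termination_msg({}))                                  # missing content is treated as ""
```

Using `x.get("content", "")` rather than `x["content"]` means a message without a content field simply does not terminate the chat instead of raising a KeyError.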

# create an AssistantAgent instance named "assistant"
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)

Execution:

# the assistant receives a message from the user, which contains the task description
user_proxy.initiate_chat(
    assistant,
    message="""
what is current time in Akureyri,Iceland ?
""",
)

# Output

user_proxy (to assistant):


what is current time in Akureyri,Iceland ?


--------------------------------------------------------------------------------
assistant (to user_proxy):

To get the current time in Akureyri, Iceland, we can use the Python `datetime` module. Here's the code:

```python
from datetime import datetime
import pytz

# Set the timezone to Akureyri
akureyri_timezone = pytz.timezone('Atlantic/Reykjavik')

# Get the current time in Akureyri
current_time = datetime.now(akureyri_timezone)

# Format the time as a string
current_time_str = current_time.strftime('%Y-%m-%d %H:%M:%S')

# Print the current time in Akureyri
print("The current time in Akureyri, Iceland is:", current_time_str)
```

Please note that the code uses the `pytz` library to set the timezone to 'Atlantic/Reykjavik', which is the timezone for Akureyri, Iceland. Make sure you have the `pytz` library installed before running the code.

Let me know if you need any further assistance!

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...

>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
user_proxy (to assistant):

exitcode: 0 (execution succeeded)
Code output:
The current time in Akureyri, Iceland is: 2023-10-12 06:30:07


--------------------------------------------------------------------------------
assistant (to user_proxy):

Great! The current time in Akureyri, Iceland is 2023-10-12 06:30:07.

Is there anything else I can help you with?

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
user_proxy (to assistant):

No, thank you! The task has been solved to full satisfaction.

--------------------------------------------------------------------------------
assistant (to user_proxy):

You're welcome! I'm glad I could help. If you have any more questions in the future, feel free to ask. Have a great day!

TERMINATE

--------------------------------------------------------------------------------
Please give feedback to assistant. Press enter or type 'exit' to stop the conversation: exit
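The code the assistant generated relies on the third-party pytz package. On Python 3.9+, the standard-library zoneinfo module can do the same job without the extra dependency; a minimal sketch:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library on Python 3.9+

# 'Atlantic/Reykjavik' covers all of Iceland, including Akureyri.
akureyri_tz = ZoneInfo("Atlantic/Reykjavik")
current_time = datetime.now(akureyri_tz)

print("The current time in Akureyri, Iceland is:",
      current_time.strftime("%Y-%m-%d %H:%M:%S"))
```

On platforms without a system timezone database (e.g. Windows), installing the small `tzdata` package from PyPI provides the zone data that zoneinfo needs.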

Refer to the Google Colab notebook for another example.

The drawback of AutoGen is that it is still in the early stages of development, and you might not always receive a proper response. During web searches it occasionally returns the desired result, but not reliably. However, some modifications to the source code can lead to better results. If you are interested in that code, please respond to this blog, and we will create another post on the topic.

Note: If you use the free tier of the OpenAI API, you may encounter token and request limit errors, especially when the conversation between the two agents becomes lengthy. Also, Turbo models are not as capable as GPT-4 at code generation. We recommend using GPT-4, ideally a long-context variant, to get a satisfying output.

There are many more types of agents that can be applied in various fields. Please refer to the following links for more information.

References:

  1. https://microsoft.github.io/autogen/
  2. https://github.com/microsoft/autogen
