Wednesday, March 27, 2024

Creating Synthetic User Research: Persona Prompting & Autonomous Agents



The process begins with scaffolding the autonomous agents using Autogen, a tool that simplifies the creation and orchestration of these digital personas. We can install the autogen PyPI package using pip:

pip install pyautogen

Format the output (optional) — This ensures word wrap for readability depending on your IDE, such as when using Google Colab to run your notebook for this exercise.

from IPython.display import HTML, display

def set_css():
    display(HTML('''
    <style>
    pre {
        white-space: pre-wrap;
    }
    </style>
    '''))

get_ipython().events.register('pre_run_cell', set_css)

Now we go ahead and get our environment set up by importing the packages and establishing the Autogen configuration — including our LLM (Large Language Model) and API keys. You can use other local LLMs via services that are backwards compatible with the OpenAI REST service — LocalAI is one such service that can act as a gateway to your locally running open-source LLMs.

I have tested this on both GPT-3.5 (gpt-3.5-turbo) and GPT-4 (gpt-4-turbo-preview) from OpenAI. You will need to weigh GPT-4's deeper responses against its longer query times.

import json
import os
import autogen
from autogen import GroupChat, Agent
from typing import Optional

# Setup LLM model and API keys
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {
        'model': 'gpt-3.5-turbo',
        'api_key': '<<Put your OpenAI key here>>',
    }
])

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": {
            "gpt-3.5-turbo",
        }
    }
)
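
If you wanted to point this at a locally hosted, OpenAI-compatible endpoint such as LocalAI (mentioned above) instead of OpenAI, the config entry mainly needs a base URL added. This is only a sketch under assumptions: the model name, key and URL are placeholders for your local setup, and depending on your pyautogen version the extra key may be named base_url or api_base.

# Example: routing requests through a local OpenAI-compatible gateway (e.g. LocalAI)
# The model name, key and URL below are placeholders for your own environment.
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {
        'model': 'mistral-7b-instruct',         # whichever model your local gateway serves
        'api_key': 'not-needed-for-local',      # many local gateways ignore the key
        'base_url': 'http://localhost:8080/v1'  # 'api_base' on older pyautogen versions
    }
])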

We then need to configure our LLM instance — which we will tie to each of the agents. This allows us, if required, to generate unique LLM configurations per agent, i.e. if we wanted to use different models for different agents.

# Define the LLM configuration settings
llm_config = {
    # Seed for consistent output, used for testing. Remove in production.
    # "seed": 42,
    "cache_seed": None,
    # Setting cache_seed = None ensures caching is disabled
    "temperature": 0.5,
    "config_list": config_list,
}
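
As a small illustration of that per-agent flexibility, you could build a second configuration filtered to a GPT-4 entry and hand it only to the agents that need the larger model. A sketch, assuming you have also added a gpt-4-turbo-preview entry to your OAI_CONFIG_LIST:

# Optional: a separate configuration for agents that should run on GPT-4
gpt4_config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": {"gpt-4-turbo-preview"}},
)

llm_config_gpt4 = {
    "cache_seed": None,
    "temperature": 0.5,
    "config_list": gpt4_config_list,
}
# Pass llm_config_gpt4 instead of llm_config when constructing that agent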

Defining our researcher — This is the persona that will facilitate the session in this simulated user research scenario. The system prompt used for that persona includes a few key things:

  • Objective: Your role is to ask questions about products and gather insights from individual customers like Emily.
  • Grounding the simulation: Before you start the task, break down the list of panelists and the order you want them to speak; avoid the panelists speaking with one another and creating confirmation bias.
  • Ending the simulation: Once the conversation has ended and the research is completed, please end your message with `TERMINATE` to finish the research session. This is generated from the generate_notice function, which is used to align system prompts for the various agents. You will also notice the researcher agent has is_termination_msg set to honor the termination.

We also add the llm_config, which ties this back to the language model configuration with the model version, keys and hyper-parameters to use. We will use the same config with all our agents.

# Avoid agents thanking each other and ending up in a loop
# Helper function for the system prompts
def generate_notice(role="researcher"):
    # Base notice for everyone, add your own extra prompts here
    base_notice = (
        '\n\n'
    )

    # Notice for non-personas (manager or researcher)
    non_persona_notice = (
        'Do not show appreciation in your responses, say only what is necessary. '
        'if "Thank you" or "You are welcome" are said in the conversation, then say TERMINATE '
        'to indicate the conversation is finished and this is your last message.'
    )

    # Custom notice for personas
    persona_notice = (
        ' Act as {role} when responding to queries, providing feedback, asked for your personal opinion '
        'or participating in discussions.'
    )

    # Check if the role is "researcher"
    if role.lower() in ["manager", "researcher"]:
        # Return the full termination notice for non-personas
        return base_notice + non_persona_notice
    else:
        # Return the modified notice for personas
        return base_notice + persona_notice.format(role=role)

# Researcher agent definition
name = "Researcher"
researcher = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Researcher. You are a top product researcher with a PhD in behavioural psychology and have worked in the research and insights industry for the last 20 years with top creative, media and business consultancies. Your role is to ask questions about products and gather insights from individual customers like Emily. Frame questions to uncover customer preferences, challenges, and feedback. Before you start the task, break down the list of panelists and the order you want them to speak; avoid the panelists speaking with each other and creating confirmation bias. If the session is terminating at the end, please provide a summary of the outcomes of the research study in clear, concise notes, not at the start.""" + generate_notice(),
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)

Define our individuals — to put into the research. Borrowing from the previous process, we can use the personas generated earlier. I have manually adjusted the prompts for this article to remove references to the major supermarket brand that was used for this simulation.

I have also included an "Act as Emily when responding to queries, providing feedback, or participating in discussions."-style prompt at the end of each system prompt to ensure the synthetic personas stay on task; this is generated from the generate_notice function.

# Emily - Customer Persona
name = "Emily"
emily = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Emily. You are a 35-year-old elementary school teacher living in Sydney, Australia. You are married with two kids aged 8 and 5, and you have an annual income of AUD 75,000. You are introverted, high in conscientiousness, low in neuroticism, and enjoy routine. When shopping at the supermarket, you prefer organic and locally sourced produce. You value convenience and use an online shopping platform. Due to your limited time from work and family commitments, you seek quick and nutritious meal planning solutions. Your goals are to buy high-quality produce within your budget and to find new recipe inspiration. You are a frequent shopper and use loyalty programs. Your preferred methods of communication are email and mobile app notifications. You have been shopping at the supermarket for over 10 years but also price-compare with others.""" + generate_notice(name),
)

# John - Customer Persona
name = "John"
john = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""John. You are a 28-year-old software developer based in Sydney, Australia. You are single and have an annual income of AUD 100,000. You are extroverted, tech-savvy, and have a high level of openness. When shopping at the supermarket, you primarily buy snacks and ready-made meals, and you use the mobile app for quick pickups. Your main goals are quick and convenient shopping experiences. You occasionally shop at the supermarket and are not part of any loyalty program. You also shop at Aldi for discounts. Your preferred method of communication is in-app notifications.""" + generate_notice(name),
)

# Sarah - Customer Persona
name = "Sarah"
sarah = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Sarah. You are a 45-year-old freelance journalist living in Sydney, Australia. You are divorced with no kids and earn AUD 60,000 per year. You are introverted, high in neuroticism, and very health-conscious. When shopping at the supermarket, you look for organic produce, non-GMO, and gluten-free items. You have a limited budget and specific dietary restrictions. You are a frequent shopper and use loyalty programs. Your preferred method of communication is email newsletters. You exclusively shop for groceries.""" + generate_notice(name),
)

# Tim - Customer Persona
name = "Tim"
tim = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Tim. You are a 62-year-old retired police officer living in Sydney, Australia. You are married and a grandparent of three. Your annual income comes from a pension and is AUD 40,000. You are highly conscientious, low in openness, and like routine. You buy staples like bread, milk, and canned goods in bulk. Due to mobility issues, you need assistance with heavy items. You are a frequent shopper and are part of the senior citizen discount program. Your preferred method of communication is direct mail flyers. You have been shopping here for over 20 years.""" + generate_notice(name),
)

# Lisa - Customer Persona
name = "Lisa"
lisa = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Lisa. You are a 21-year-old university student living in Sydney, Australia. You are single and work part-time, earning AUD 20,000 per year. You are highly extroverted, low in conscientiousness, and value social interactions. You shop here for popular brands, snacks, and alcoholic drinks, mostly for social events. You have a limited budget and are always looking for sales and discounts. You are not a frequent shopper but are interested in joining a loyalty program. Your preferred method of communication is social media and SMS. You shop wherever there are sales or promotions.""" + generate_notice(name),
)

Define the simulated environment and the rules for who can speak — We are allowing all the agents we have defined to sit within the same simulated environment (group chat). More complex scenarios are possible where we control how and when the next speaker is selected, so here we define a simple speaker-selection function tied to the group chat: it makes the researcher the lead and ensures we go around the room asking everyone for their thoughts a few times.

# def custom_speaker_selection(last_speaker, group_chat):
#     """
#     Custom function to select which agent speaks next in the group chat.
#     """
#     # List of agents excluding the last speaker
#     next_candidates = [agent for agent in group_chat.agents if agent.name != last_speaker.name]
#
#     # Select the next agent based on your custom logic
#     # For simplicity, we are just rotating through the candidates here
#     next_speaker = next_candidates[0] if next_candidates else None
#
#     return next_speaker

def custom_speaker_selection(last_speaker: Optional[Agent], group_chat: GroupChat) -> Optional[Agent]:
    """
    Custom function to ensure the Researcher interacts with each participant 2-3 times.
    Alternates between the Researcher and participants, tracking interactions.
    """
    # Define participants and initialize or update their interaction counters
    if not hasattr(group_chat, 'interaction_counters'):
        group_chat.interaction_counters = {agent.name: 0 for agent in group_chat.agents if agent.name != "Researcher"}

    # Define a maximum number of interactions per participant
    max_interactions = 6

    # If the last speaker was the Researcher, find the next participant who has spoken the least
    if last_speaker and last_speaker.name == "Researcher":
        next_participant = min(group_chat.interaction_counters, key=group_chat.interaction_counters.get)
        if group_chat.interaction_counters[next_participant] < max_interactions:
            group_chat.interaction_counters[next_participant] += 1
            return next((agent for agent in group_chat.agents if agent.name == next_participant), None)
        else:
            return None  # End the conversation once all participants have reached the maximum interactions
    else:
        # If the last speaker was a participant, return the Researcher for the next turn
        return next((agent for agent in group_chat.agents if agent.name == "Researcher"), None)

# Adding the Researcher and Customer Persona agents to the group chat
groupchat = autogen.GroupChat(
    agents=[researcher, emily, john, sarah, tim, lisa],
    speaker_selection_method=custom_speaker_selection,
    messages=[],
    max_round=30
)

Define the manager to pass instructions into and manage our simulation — When we kick things off we will speak only to the manager, who will speak to the researcher and the panelists. This uses something called a GroupChatManager in Autogen.

# Initialise the manager
manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config,
    system_message="You are a research manager agent that can manage a group chat of multiple agents, made up of a researcher agent and many people making up a panel. You will limit the discussion between the panelists and help the researcher in asking the questions. Please ask the researcher first how they want to conduct the panel." + generate_notice(),
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)

We set up the human interaction — allowing us to pass instructions to the various agents we have started. We give it the initial prompt and kick things off.

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
    system_message="A human admin.",
    human_input_mode="TERMINATE"
)

# start the research simulation by giving an instruction to the manager
# manager <-> researcher <-> panelists
user_proxy.initiate_chat(
    manager,
    message="""
Gather customer insights on a supermarket grocery delivery service. Identify pain points, preferences, and suggestions for improvement from different customer personas. Could you all please give your own personal opinions before sharing more with the group and discussing. As a researcher, your job is to ensure that you gather unbiased information from the participants and provide a summary of the outcomes of this study back to the supermarket brand.
""",
)

Once we run the above, the output is available live within our Python environment; you will see the messages being passed around between the various agents.

Live Python output — our researcher talking to the panelists
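
If you also want to inspect the transcript programmatically once the run finishes, a quick loop over groupchat.messages does the job. A minimal sketch — the "name" and "content" keys reflect Autogen's usual message dictionaries, so treat them as an assumption for your version:

# Print each speaker and a short preview of what they said
for msg in groupchat.messages:
    speaker = msg.get("name", "unknown")
    content = str(msg.get("content", ""))
    print(f"{speaker}: {content[:120]}")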

Now that our simulated research study has concluded, we would like to get some more actionable insights. We can create a summary agent to help us with this task and also use it in a Q&A scenario. Just be careful here: very large transcripts will need a language model that supports a larger input (context window).

We need to capture all the conversations from our simulated panel discussion above to use as the user prompt (input) to our summary agent.

# Get response from the groupchat for the user prompt
messages = [msg["content"] for msg in groupchat.messages]
user_prompt = "Here is the transcript of the study ```{customer_insights}```".format(customer_insights="\n>>>\n".join(messages))
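
Given the context-window caveat above, a rough size check on the transcript before handing it to the summary agent can save a failed call. This is only a sketch using the common ~4-characters-per-token rule of thumb rather than a real tokenizer, and the budget is an assumed figure you should match to your model:

# Very rough token estimate: roughly 4 characters per token for English text
approx_tokens = len(user_prompt) / 4
context_budget = 16000  # assumed budget, e.g. for a 16k-context model minus some headroom

if approx_tokens > context_budget:
    print(f"Transcript is roughly {approx_tokens:.0f} tokens; consider a larger-context model or chunked summarisation.")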

Let's craft the system prompt (instructions) for our summary agent — This agent will focus on creating a tailored report card from the previous transcripts and giving us clear suggestions and actions.

# Generate the system prompt for the summary agent
summary_prompt = """
You are an expert researcher in behavioural science and are tasked with summarising a research panel. Please provide a structured summary of the key findings, including pain points, preferences, and suggestions for improvement.
This should be based on the following format:

```
Research Study: <<Title>>

Subjects:
<<Overview of the subjects and number, any other key information>>

Summary:
<<Summary of the study, include detailed analysis as an expert>>

Pain Points:
- <<List of Pain Points - Be as clear and prescriptive as required. I expect a detailed response that can be used by the brand directly to make changes. Give a short paragraph per pain point.>>

Suggestions/Actions:
- <<List of Actions - Be as clear and prescriptive as required. I expect a detailed response that can be used by the brand directly to make changes. Give a short paragraph per recommendation.>>
```
"""

Define the summary agent and its environment — Let's create a mini environment for the summary agent to run in. It will need its own proxy (environment) and the initiate command, which will pull in the transcripts (user_prompt) as the input.

summary_agent = autogen.AssistantAgent(
    name="SummaryAgent",
    llm_config=llm_config,
    system_message=summary_prompt + generate_notice(),
)

summary_proxy = autogen.UserProxyAgent(
    name="summary_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
    system_message="A human admin.",
    human_input_mode="TERMINATE"
)

summary_proxy.initiate_chat(
    summary_agent,
    message=user_prompt,
)

This gives us an output in the form of a report card in Markdown, along with the ability to ask further questions in a Q&A-style chatbot on top of the findings.

Live output of a report card from the Summary Agent, followed by open Q&A
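
To keep that Q&A going programmatically rather than typing into the console, one option is to send a follow-up message to the summary agent through the same proxy. A sketch assuming Autogen's standard send API; the question text is just an example:

# Ask a follow-up question in the same session via the existing proxy
summary_proxy.send(
    "Which pain point should the brand prioritise first, and why?",
    summary_agent,
    request_reply=True,
)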


