Azure OpenAI works well for small, static prompts but poorly on long, dynamic prompts
I am using Azure OpenAI gpt-35-turbo with LangChain's chains, agents, and RunnableParallel. The application is for an agricultural use case (e.g., predicting whether to irrigate or not given the current conditions the crop is in). All sensor and weather inputs come from APIs. Below is the code snippet for the LLM call.
# (imports shown for reference; project helpers like printlg, printDurFrom,
#  stringify, fetch_last, getRequiredWeatherData, return_chat_credentials,
#  the Farmer config and the splitters are defined elsewhere in my codebase)
import time
from datetime import datetime
from typing import Literal
from langchain.vectorstores import Chroma
from langchain.storage import LocalFileStore, create_kv_docstore
from langchain.retrievers import ParentDocumentRetriever
from langchain.llms import AzureOpenAI
from langchain.agents import Tool, initialize_agent, AgentType
from langchain.schema import SystemMessage, HumanMessage
from langchain.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnableParallel

"""
Vector DB
"""
db = Chroma(VECTOR_DB_COLLECTION_NAME, embedding_function=embeddings,
            persist_directory="./" + VECTOR_DB_COLLECTION_NAME)
printlg("[-] Chroma Vector DB " + VECTOR_DB_COLLECTION_NAME + " Loaded")
printDurFrom(st, "Chroma Loaded", "lg")
fs = LocalFileStore("./" + VECTOR_DB_DOCSTORE_NAME)
store = create_kv_docstore(fs)
retriever = ParentDocumentRetriever(
    vectorstore=db,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter
)
llm = AzureOpenAI(openai_api_key=return_chat_credentials()["api-key"],
                  model_name="gpt-35-turbo", api_version="2023-03-15-preview",
                  base_url=return_chat_credentials()["api-url"], temperature=0.2)
"""
Tool Functions
"""
def soil_tool(input: str = ""):
    """
    NPK-Moisture Extraction tool
    """
    dat = fetch_last("both")
    res = stringify(dat)
    return res
def context_tool(input: str = "Best Practices for agriculture"):
    """
    Vector Retrieval tool
    """
    st = time.time()
    context = ""
    docs = retriever.invoke(input)
    for doc in docs:
        # Filter based on threshold score
        context += doc.page_content + "\n" + "-" * 5 + "\n"
    printDurFrom(st, "Context Extracted", "lg")
    return context
def weather_tool(loc: Literal["<lat>,<long>"] = ","):
    """
    Weather Data Extraction tool
    """
    st = time.time()
    lat, long = loc.split(",")
    context = stringify(getRequiredWeatherData(lat, long), "weather")
    printDurFrom(st, "Weather API", "lg")
    return "The Hourly and Daily Weather:\n" + str(context)
"""
Tools
"""
context_agent = Tool(name="Data Tool",
                     description="Use this tool to obtain agriculture-related information like irrigation guidelines, fertilization guidelines, crop production guidelines and pest control mechanisms from agricultural books. Input to be passed is '<crop-name> <context-that-is-needed>'.",
                     func=context_tool)
soil_agent = Tool(name="Soil Tool",
                  description="Use this tool to get current soil condition data. Input should be 'both'.",
                  func=soil_tool)
weather_agent = Tool(name="Weather Tool",
                     description="Use this tool to get current weather condition data. Input should be '<lat>,<long>'.",
                     func=weather_tool)
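Since the tool functions are plain Python functions, I can smoke-test them directly before handing them to the agent; the coordinates and query string below are placeholders, not my actual field values:

# Quick sanity checks on the tool functions (placeholder inputs)
print(soil_tool("both"))
print(weather_tool("12.9716,77.5946"))
print(context_tool("paddy irrigation guidelines")[:500])

These all return sensible strings, so the raw data reaching the agent looks correct.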
def createAgent(agentName: Literal["irrigation", "pesticide", "fertilization"], agentSpecificPromptPart: str):
    """
    Creates KPI-specific Agents
    """
    st = time.time()
    tools = [weather_agent, soil_agent, context_agent]
    agent_chain = initialize_agent(
        tools=tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False,
        max_iterations=15, max_new_tokens=4000, handle_parsing_errors=True
    )
    printDurFrom(st, agentName.capitalize() + " Agent Initialized", "lg")
    sysmsg = SystemMessage(content="You are an Agricultural Assistant. You have access to tools that have all the contextual information and the live data like weather and soil nutrition values you need. Humans don't have any knowledge about the tools you have access to and you don't have to tell them.")
    hummsg = HumanMessage(content=f"""
    Here is the basic information about the crop I've sown and the agricultural land you need,
    CropName: {Farmer.CROP_TYPE}
    Variety: {Farmer.VARIETY}
    SoilType: {Farmer.SOIL_TYPE}
    Location: {Farmer.LOCATION}
    Lat: {Farmer.LAT}
    Long: {Farmer.LONG}
    SownIn: {Farmer.SOWN_IN}
    Phase: {Farmer.STAGE}
    TodayDate: {datetime.now().strftime("%d-%m-%Y")} [dd-mm-yyyy]
    {agentSpecificPromptPart}
    Only answer the given question with a short reason. Do not leave sentences incomplete.
    """)
    prompt = ChatPromptTemplate.from_messages([
        sysmsg,
        hummsg,
    ])
    # Return the coroutine method and the prompt; the caller awaits the actual invocation
    return agent_chain.ainvoke, prompt
def chainCall(mode: Literal["irrigation", "pesticide", "fertilization"]):
    """
    Creates and returns the specific chains
    """
    if mode == "irrigation":
        prmpt = """
        Consider "precipitationProbability" from the weather tool and "moisture" from the MoistureSensor of the soil tool, and answer the following questions.
        - Is this the right time to irrigate?
        """
        return createAgent(mode, prmpt)
    elif mode == "pesticide":
        prmpt = """
        Consider "precipitationProbability" and "windSpeed" from the weather tool and "moisture" from the MoistureSensor of the soil tool, and answer the following questions.
        - Is this the right time to spray pesticides?
        - Which pesticide to use?
        - How much to spray?
        """
        return createAgent(mode, prmpt)
    elif mode == "fertilization":
        prmpt = """
        Consider "precipitationProbability" and "windSpeed" from the weather tool, "moisture" from the MoistureSensor, and "nitrogen", "phosphorus" and "potassium" from the soilProbeSensor of the soil tool, and answer the following questions.
        - Is this the right time to spray fertilizer?
        - Which fertilizer to use?
        - How much to spray?
        """
        return createAgent(mode, prmpt)
    else:
        raise ValueError("Invalid 'mode'")
def lambdaFuncGen(model, prompt: str):
    """
    Lambda Function Generator for Main Chain
    """
    async def lambdaFunc(dummy: None):
        """
        Lambda Function that returns the result of the Agent Chain
        """
        return await model(prompt)
    return lambdaFunc
async def mainCall(modes: list):
    """
    Creates and invokes Main Chain asynchronously
    """
    chainDic = {}
    prmptDic = {}
    for mode in modes:
        model, prmpt = chainCall(mode)
        chainDic[mode] = RunnableLambda(lambdaFuncGen(model, prmpt))
        prmptDic[mode] = prmpt.format()
    parallelChain = RunnableParallel(chainDic)
    results = await parallelChain.ainvoke("")
    return results
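For completeness, this is roughly how everything is driven from my entry point (the asyncio wrapper below is illustrative, not my exact production code):

import asyncio

# Illustrative driver: run all three KPI agents in parallel and print the answers
results = asyncio.run(mainCall(["irrigation", "pesticide", "fertilization"]))
for mode, answer in results.items():
    print(mode, "->", answer)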
This code gives self-contradictory responses, like "Yes, irrigation must be done. The soil moisture level is 97.5436% and heavy rain is predicted for the next few hours." or "No, irrigation should not be done. The soil moisture level is 10% which is below required moisture level and the no rain is predicted.". Out of 10 responses for the same inputs, only 2 or 3 turn out to be acceptable (sometimes all 10 go wrong).
But when a small prompt with all the data statically duplicated is passed to the LLM through a simple LLMChain, the response is very verbose, making it feel like quite a general response, but it has the correct answers within. Like, every single time! (Really sorry, I'm unable to post that code snippet now; I will probably add it in a future edit or below in the thread as soon as I get access to it.)
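To give a rough idea of what that simpler variant looks like, here is a sketch reconstructed from memory; the prompt wording and values below are placeholders, not the real snippet:

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Placeholder reconstruction of the static-prompt version
static_prompt = PromptTemplate.from_template(
    "You are an agricultural assistant.\n"
    "Crop: {crop}. Soil moisture: {moisture}%. Precipitation probability: {precip}%.\n"
    "Is this the right time to irrigate? Answer with a short reason."
)
simple_chain = LLMChain(llm=llm, prompt=static_prompt)
print(simple_chain.run(crop="Paddy", moisture=42.0, precip=10))  # placeholder values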
My use case can't be done statically in that simple format. What am I doing wrong here? Is it the prompt? Or is it the approach? Or is it the LLM model or its version? (Then why did the simple prompt do the job? :confused:) It would be really appreciated if someone helped me sort this issue out. I'm a rookie here (both to the community and to gen AI). Even small pointers in the right direction will mean a lot. Thanks!!!