📰 Real-Time Retrieval with LLMs and Tools¶
Introduction¶
In this notebook, we’ll explore how to combine Large Language Models (LLMs) like GPT-4o with real-time search tools to answer questions about the latest news.
This approach—Retrieval-Augmented Generation (RAG)—enhances LLMs’ knowledge by integrating live, external data.
📚 What We’ll Do¶
- 🔍 Compare answers from:
- 🧠 LLM-Only (no external data),
- 🌐 LLM + Tools (using a search engine to get the latest headlines).
- 🔎 Observe how real-time search adds relevance, accuracy, and specificity to LLM outputs.
🛠️ Tools¶
We’ll use:
- GPT-4o (to generate summaries and synthesize information),
- a search tool (to retrieve real-time headlines),
- (optional) other tools like Wikipedia summarization for additional context.
🎯 Use Case: News Headlines¶
We’ll focus on a real-world application:
🔴 Generating a weekly global news digest
This shows how an LLM’s output can change dramatically depending on whether it uses static knowledge or live data.
🧪 Key Objectives¶
1️⃣ See how tools boost LLM answers in real-world news tracking.
2️⃣ Compare specificity, accuracy, and freshness of LLM-Only vs. LLM+Tools answers.
3️⃣ Explore how this impacts downstream tasks like:
- Summarization
- Decision-making
- Research
📝 LLM-Only Headline Generation (Function Calling)¶
Let’s first use the LLM alone to generate “current headlines” using function calling.
The key here is to see how the LLM tries to “imagine” headlines without access to real-time data (⚠️ usually based on its training data up to 2023-2024).
We’ll define a simple function schema and call it via GPT-4o to mimic real-time headline generation.
🧩 Function Schema: generate_headlines¶
To make the process of retrieving and verifying real-time news headlines more structured, we define a function schema for the LLM. This schema explicitly tells the model:
- What the function is called.
- What data structure (JSON) it should return.
Here’s what each field means:
Field | Type | Description
---|---|---
headlines | array | List of top news headlines. These should be clear, concise, and fact-based.
newspapers | array | List of news outlets or media organizations from which the headlines are sourced.
sources | array | List of URLs (web links) for each headline, ensuring transparency and traceability.
highlights | array | Three-sentence summary that synthesizes the main points or themes from the headlines.
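For illustration, the arguments object the model is asked to return would look roughly like this (the values below are hypothetical placeholders, not real headlines):
example_arguments = {
    "headlines": ["Hypothetical headline about topic A", "Hypothetical headline about topic B"],
    "newspapers": ["Example Times", "Example Daily"],
    "sources": ["https://example.com/story-a", "https://example.com/story-b"],
    "highlights": ["A short synthesis of the main themes across the headlines."]
}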
This structured approach ensures:
- The LLM’s output is consistent and verifiable.
- We can directly compare and evaluate the LLM’s performance.
- Each function call becomes a modular, reusable component for future pipelines.
Let’s now see how we invoke this function within the LLM API call — and then move on to how tools (like DuckDuckGo search and Wikipedia) can complement and improve the factual accuracy of the generated headlines!
from openai import OpenAI
import json
# Initialize the client
client = OpenAI()
function_schema = [
{
"name": "generate_headlines",
"description": "Generate top headlines of the week",
"parameters": {
"type": "object",
"properties": {
"headlines": {
"type": "array",
"items": {"type": "string"},
"description": "List of top news headlines"
},
"newspapers": {
"type": "array",
"items": {"type": "string"},
"description": "List of newspapers or outlets the headlines were taken from"
},
"sources": {
"type": "array",
"items": {"type": "string"},
"description": "List of URLs for the headlines"
},
"highlights": {
"type": "array",
"items": {"type": "string"},
"description": "three sentences that summarize the headlines"
}
},
"required": ["headlines", "sources"]
}
}
]
# Make a function call to generate headlines
response = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a helpful assistant that generates news headlines."},
{"role": "user", "content": "Make me a list of the top 5 headlines of the week with background context"}
],
functions=function_schema,
function_call={"name": "generate_headlines"}
)
# Extract the LLM’s generated headlines
headlines = response.choices[0].message.function_call.arguments
headlines_json = json.loads(headlines)
print("📰 LLM-Generated Headlines (no real-time data):")
headlines = headlines_json.get("headlines", [])
sources = headlines_json.get("sources", [])
newspapers = headlines_json.get("newspapers", [])
highlights = headlines_json.get("highlights", [])
for i, headline in enumerate(headlines):
    print(f"{i+1}. {headline}")
    # Fall back to "None" when the model omitted a field for this index
    print(f"Source: {sources[i] if i < len(sources) else 'None'}")
    print(f"Newspaper: {newspapers[i] if i < len(newspapers) else 'None'}")
    print(f"Highlight: {highlights[i] if i < len(highlights) else 'None'}")
    print("--------------------------------")
📰 LLM-Generated Headlines (no real-time data):
1. Governments Urge Public Calm Amid Growing Middle-East Tensions
Source: http://newswebsite.com/middle-east-tensions
Newspaper: None
Highlight: Tensions continue to rise in the Middle East as governments call for calm and diplomatic solutions.
--------------------------------
2. Major Advances in AI Technology Announced by Leading Tech Firms
Source: http://technews.com/ai-advances
Newspaper: None
Highlight: Leading technology firms unveil significant AI innovations, promising cutting-edge applications across industries.
--------------------------------
3. Historic Climate Agreement Reached at Global Summit
Source: http://climatesummit.org/agreement
Newspaper: None
Highlight: A landmark climate agreement is reached at the global summit, signaling hope for environmental action.
--------------------------------
4. Unexpected Surge in Global Markets Leaves Analysts Stunned
Source: http://financenews.com/global-markets-surge
Newspaper: None
Highlight: None
--------------------------------
5. Breakthrough in Cancer Research Offers New Hope to Patients
Source: http://medicalbreakthroughs.com/cancer-research
Newspaper: None
Highlight: None
--------------------------------
Main Observations¶
✅ General Plausibility
The headlines generated sound very plausible and realistic! They are general and cover common themes:
- Economy/Markets
- International Peace
- Medical breakthroughs
- Climate action
- Tech innovation
✅ Diversity
There’s a good variety in topics: economy, global politics, health, climate, and technology — covering typical news beats.
✅ Formatting & Structure
The LLM even added:
- Source URLs (likely hallucinated, not real!)
- Newspaper names (missing in this case, but often seen)
- Highlight summaries
This looks like real news output — very convincing at first glance.
Main Limitations¶
⚠️ Key Limitations
🔴 Not Real-Time
These headlines are not grounded in actual real-world data. They’re “imagined” by the LLM based on its training data (cutoff in 2023-2024).
🔴 Hallucination
- The sources/URLs are hallucinated – not real links (see the quick check below).
- No real verification – it’s a guess!
🔴 Potential Mismatch
If we ask for current or real headlines, the LLM’s output doesn’t meet the user’s real needs.
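As a quick sanity check on the hallucination point above, you can try resolving the generated source URLs. Below is a minimal sketch using the third-party requests library (an assumption on our side, not a dependency of the original notebook); it simply reports whether each URL responds.
import requests
# Check whether the LLM-generated "source" URLs actually resolve.
# Sketch only: assumes the `requests` package is installed and network access is available.
for url in headlines_json.get("sources", []):
    try:
        status = requests.head(url, timeout=5, allow_redirects=True).status_code
    except requests.RequestException:
        status = "unreachable"
    print(f"{url} -> {status}")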
📝 Conclusion¶
This is a great teachable moment:
- Even with function calling, LLMs can produce convincing and structured content.
- But without real-time search, they cannot provide grounded, current information.
- Next step: 🔎 Show how using a search engine tool (here, DuckDuckGo Search) can fix this!
Let’s proceed to integrating the DuckDuckGo Search and Wikipedia APIs to retrieve real headlines and compare the outputs! 🚀
🔎 Real-Time Information Retrieval: DuckDuckGo Search & Wikipedia APIs¶
In this section, we’ll bridge the gap left by LLMs alone by using external APIs to gather real, up-to-date information!
🌐 1️⃣ DuckDuckGo-Search¶
🔍 What is DuckDuckGo-Search?
It’s a lightweight Python library that uses DuckDuckGo’s public search endpoints to retrieve real-time search results — no scraping needed!
✅ Why use it?
- No API key required!
- Fast and free to use for small-scale, low-volume queries.
- Returns search results with titles, snippets, and URLs — perfect for grounding your LLM output.
⚠️ Limitations
- Limited to 10–30 results by default.
- May not have the full scope of Google, but very easy to use.
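Here is a minimal sketch of the library in action before we build the full news-search helper below (assumes duckduckgo_search is installed, e.g. via pip install duckduckgo-search):
from duckduckgo_search import DDGS
# Minimal sketch: a plain text search; the news-specific helper used in this notebook is defined below.
with DDGS() as ddgs:
    for hit in ddgs.text("artificial intelligence news", max_results=3):
        print(hit["title"], "-", hit["href"])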
📚 2️⃣ Wikipedia API¶
🔍 What is the Wikipedia Library?
It’s a Python wrapper for the Wikipedia API, allowing easy retrieval of summaries and page content.
✅ Why use it?
- Provides factual context and quick background on any topic.
- No API key needed — open source!
⚠️ Limitations
- Not always up-to-date like live news.
- Best for foundational knowledge.
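A one-line sketch of typical usage (assumes the wikipedia package is installed; the topic is just an example):
import wikipedia
# Minimal sketch: fetch a two-sentence summary for a topic.
print(wikipedia.summary("Retrieval-augmented generation", sentences=2))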
🚀 Let’s see them in action!¶
- DuckDuckGo Search to get the latest headlines.
- Wikipedia API to provide a short contextual summary.
We’ll compare these real-time results to the LLM-only output to see the difference!
from duckduckgo_search import DDGS
# Country codes for different regions
COUNTRY_CODES = {
'UK': 'uk-en', # United Kingdom
'FR': 'fr-fr', # France
'ES': 'es-es', # Spain
'DE': 'de-de', # Germany (standard)
'GLOBAL': 'wt-wt' # Worldwide
}
def ddg_news_search(query, country='GLOBAL', num_results=5, time_period='w'):
"""
Search news using DuckDuckGo with country and time filtering
Args:
query (str): Search keywords
country (str): Country code - 'UK', 'FR', 'ES', 'DE', or 'GLOBAL'
num_results (int): Maximum number of results to return
time_period (str): Time filter - 'd' (day), 'w' (week), 'm' (month), None (all time)
Returns:
List of dictionaries with news articles including title, body, url, date, image, source
"""
try:
# Get region code
region = COUNTRY_CODES.get(country.upper(), 'wt-wt')
with DDGS() as ddgs:
results = []
# Use the news search with specified parameters
news_results = ddgs.news(
keywords=query,
region=region,
timelimit=time_period,
max_results=num_results
)
for article in news_results:
results.append({
'title': article.get('title', ''),
'body': article.get('body', ''),
'url': article.get('url', ''),
'date': article.get('date', ''),
'image': article.get('image', ''),
'source': article.get('source', ''),
'country_searched': country.upper(),
'region_code': region
})
return results
except Exception as e:
print(f"Error performing news search: {e}")
return []
import wikipedia
def get_wikipedia_summary(query):
try:
summary = wikipedia.summary(query, sentences=10)
return summary
except wikipedia.exceptions.DisambiguationError as e:
return f"Disambiguation page: {e.options[:5]}"
except wikipedia.exceptions.PageError:
return "No page found."
except Exception as e:
return f"Error: {str(e)}"
# 🔍 Example search & context
query = "Roland Garros Results"
search_results = ddg_news_search(query, country='GLOBAL', num_results=5, time_period='w')
wiki_context = get_wikipedia_summary(query)
print("🔎 DuckDuckGo Search Results:")
for idx, result in enumerate(search_results, 1):
print(f"🔍 Result {idx}: {result.get('title')}")
print(f"URL: {result.get('url')}")
print(f"Snippet: {result.get('body')}\n")
print("📚 Wikipedia Context:")
print(wiki_context)
🔎 DuckDuckGo Search Results:
🔍 Result 1: Where to watch Roland-Garros 2025 today for free
URL: https://www.mlive.com/tv/2025/05/where-to-watch-roland-garros-2025-today-for-free.html
Snippet: Italy's Matteo Gigante casts his shadow on the court as he serves against Ben Shelton of the U.S. during their third round match of the French Tennis Open, at the Roland-Garros stadium, in Paris, Friday, May 30, 2025. (AP Photo/Thibault Camus) AP

🔍 Result 2: 2025 French Open brackets: Latest schedule, results from Roland Garros
URL: https://www.msn.com/en-us/sports/tennis/2025-french-open-brackets-latest-schedule-results-from-roland-garros/ar-AA1FOS22
Snippet: Here are the latest results and schedule for the 2025 French Open: For a full list of results, visit the Roland-Garros 2025 tournament site. No. 6 Novak Djokovic (Serbia) vs. Filip Misolic (Austria) No. 3 Alexander Zverev (Germany) vs. Flavio Cobolli (Italy) No. 1 Jannik Sinner (Italy) vs. Jiri Lehecka (Czech Republic)

🔍 Result 3: French Open results 2025: Updated scores, bracket, seeds for men's and women's tennis singles at Roland-Garros
URL: https://www.msn.com/en-ca/sports/other/french-open-results-2025-updated-scores-bracket-seeds-for-mens-and-womens-tennis-singles-at-roland-garros/ar-AA1FqNWP
Snippet: Men's seedsSeedingPlayer1Jannik Sinner2Carlos Alcaraz3Alexander Zverev4Taylor Fritz5Jack Draper6Novak Djokovic7Casper Ruud8Lorenzo Musetti9Alex de Minaur10Holger Rune11Daniil Medvedev12Tommy Paul13Ben Shelton14Arthur Fils15Frances Tiafoe16Grigor Dimitrov17Andrey Rublev18Francisco Cerundolo19Jakub Mensik20Stefanos Tsitsipas21Tomas Machac22Ugo Humbert23Sebastian Korda24Karen Khachanov25Alexei Popyrin26Alejandro Davidovich Fokina27Denis Shapovalov28Brandon Nakashima29Felix Auger-Aliassime30Hubert Hurkacz31Giovanni Mpetshi Perricard32Alex Michelsen2025 French Open men's first-round drawEach match in the men's draw is handled in best of five sets.

🔍 Result 4: Carlos Alcaraz Talks 'Difficult' French Open Win vs. Damir Dzumhur at Roland-Garros
URL: https://bleacherreport.com/articles/25200716-carlos-alcaraz-talks-difficult-french-open-win-vs-damir-dzumhur-roland-garros
Snippet: Carlos Alcaraz said he "suffered quite a lot" during his third-round win over Damir Dzumhur during the 2025 French Open in Paris.

🔍 Result 5: French Open 2025 Saturday Schedule and Predictions for Roland-Garros Bracket
URL: https://bleacherreport.com/articles/25200591-french-open-2025-saturday-schedule-and-predictions-roland-garros-bracket
Snippet: From full schedules to predictions, everything you need is below. TNT will broadcast live from Roland-Garros starting at 5 a.m. ET. There will also be whip-around coverage on truTV at the same time, while all courts will stream live on Max.

📚 Wikipedia Context:
The French Open (French: Internationaux de France de tennis), also known as Roland-Garros (French: [ʁɔlɑ̃ ɡaʁos]), is a tennis tournament organized by the French Tennis Federation annually at Stade Roland Garros in Paris, France. It is chronologically the second of the four Grand Slam tennis events every year, held after the Australian Open and before Wimbledon and the US Open. The French Open begins in late May and continues for two weeks. The tournament and venue are named after the French aviator Roland Garros. The French Open is the premier clay court championship in the world and the only Grand Slam tournament currently held on this surface. Until 1975, the French Open was the only major tournament not played on grass. Between the seven rounds needed for a championship, the clay surface characteristics (slower pace, higher bounce), and the best-of-five-set men's singles matches, the French Open is widely regarded as the most physically demanding tournament in tennis. == History == Officially named in French Internationaux de France de Tennis ("French Internationals of Tennis" in English), the tournament uses the name Roland-Garros in all languages, and it is usually called the French Open in English. In 1891, the Championnat de France, which is commonly referred to in English as the French Championships, began. This was only open to tennis players who were members of French clubs.
🌎🔎 News Round-Up with LLM + Tools¶
So far, we’ve seen how to:
✅ Use an LLM (GPT-4o) to generate plausible headlines (no real-time grounding).
✅ Use duckduckgo-search and Wikipedia to gather current and contextual information.
🔍 Next Step: LLMs with Tool Calling¶
Instead of manually calling each search function ourselves, let’s empower the LLM to:
1️⃣ Accept a high-level user query (e.g. “Give me a news round-up for France this week”),
2️⃣ Call the right tools (ddg_news_search, get_wikipedia_summary),
3️⃣ Integrate the real-time and contextual data,
4️⃣ Generate a polished, structured news round-up — grounded in real search results!
This illustrates:
🔴 LLM-only = creative but not real-time
🟢 LLM + Tools = current, factual, and more reliable
news_tool = {
"type": "function",
"function": {
"name": "get_news",
"description": "Search current news with DuckDuckGo. Returns latest news headlines for a topic.",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "Search query/topic."},
"country": {"type": "string", "description": "Country code (e.g. 'UK', 'FR', 'GLOBAL')."},
"time_period": {"type": "string", "description": "Time period for news (e.g. 'w' for week, 'm' for month)."}
},
"required": ["query", "country", "time_period"]
}
}
}
context_tool = {
"type": "function",
"function": {
"name": "get_context",
"description": "Fetch a Wikipedia summary for a topic.",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "Topic to look up in Wikipedia."}
},
"required": ["query"]
}
}
}
# 🧰 Tool functions (reusing the helpers defined above)
def get_news(query, country, time_period):
return ddg_news_search(query, country, num_results=20, time_period=time_period)
def get_context(query):
return get_wikipedia_summary(query)
# 🪄 Register the tools
tools = [news_tool, context_tool]
messages = [
{
"role": "user",
"content": (
"Can you tell me what's happening about AI development in France this month? "
"And provide some context if needed. Please provide the source of the information with the url."
)
}
]
# 🪄 Let the LLM decide which tool(s) to call!
response = client.chat.completions.create(
model="gpt-4o",
messages=messages,
tools=tools,
tool_choice="auto" # Let LLM decide if/what to call!
)
response_message = response.choices[0].message
if response_message.tool_calls:
print("🔧 LLM is calling tools...")
# Execute each tool call
for tool_call in response_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
print(f" 📞 Calling {function_name} with args: {function_args}")
# Execute the appropriate function
if function_name == "get_news":
function_result = get_news(
query=function_args["query"],
country=function_args["country"],
time_period=function_args["time_period"]
)
elif function_name == "get_context":
function_result = get_context(
query=function_args["query"]
)
else:
function_result = f"Unknown function: {function_name}"
🔧 LLM is calling tools...
 📞 Calling get_news with args: {'query': 'AI development', 'country': 'FR', 'time_period': 'm'}
 📞 Calling get_context with args: {'query': 'AI development in France'}
⚡️ What the LLM Decided to Do¶
✅ Tool Calls:
- First, it called the get_news tool with: query="AI development", country="FR", time_period="m".
- Then, it called the get_context tool with: query="AI development in France".
✅ The LLM combined both real-time news updates and Wikipedia context in a single response plan!
Now let's use this to generate an AI news round-up for France this month. To do that, we feed the tool results back to the LLM as tool messages in the conversation.
if response_message.tool_calls:
messages.append(response_message)
for tool_call in response_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
# Execute the appropriate function
if function_name == "get_news":
function_result = get_news(
query=function_args["query"],
country=function_args["country"],
time_period=function_args["time_period"]
)
elif function_name == "get_context":
function_result = get_context(
query=function_args["query"]
)
else:
function_result = f"Unknown function: {function_name}"
messages.append({
"tool_call_id": tool_call.id,
"role": "tool",
"name": function_name,
"content": json.dumps(function_result, default=str)
})
final_response = client.chat.completions.create(
model="gpt-4o",
messages=messages
)
print(final_response.choices[0].message.content)
In May 2025, notable developments in AI in France include a significant business agreement by Cykel AI PLC. The company signed a commercial deal with Transpharmation Ltd to deploy Lucy, Cykel's AI recruitment agent. This move is expected to enhance AI applications in business settings (source: Zonebourse, [link](https://www.zonebourse.com/cours/action/CYKEL-AI-DEVELOPMENT-LIMI-62874329/actualite/Cykel-AI-PLC-decroche-un-accord-commercial-de-niveau-entreprise-avec-Transpharmation-Ltd-50099310/)). Additionally, Cykel AI is planning to raise £750,000 through a share placement to fund ongoing operations and implement a new cash reserve strategy (source: Zonebourse, [link](https://www.zonebourse.com/cours/action/CYKEL-AI-DEVELOPMENT-LIMI-62874329/actualite/Cykel-AI-prevoit-de-lever-des-fonds-par-le-biais-d-un-placement-d-actions-50071230/)).

For context, AI development in France has been advancing steadily, with a focus on privacy and security. A key player in the field is the French startup Poolside AI, known for developing AI models that automate coding processes. Founded in the U.S., the company moved its headquarters to Paris in August 2024. It has secured significant funding and partnerships, emphasizing the country's growing influence in AI technology.
⚙️ How the LLM Uses Tool Results for the Final Response¶
Here’s a breakdown of what happens in the handle_tool_calls_and_get_final_answer function:
✅ Step 1: User Message → LLM Decides on Tool Calls
The user’s question is first sent to the LLM (model="gpt-4o"). Because of tool_choice="auto", the LLM decides whether it needs to call any external tools (like get_news or get_context) to provide a better answer.
✅ Step 2: LLM’s Tool Calls → Execute Functions
If tools are called, the system extracts the tool calls (response_message.tool_calls).
For each tool:
- It reads the function name and arguments.
- It runs the real Python function (get_news or get_context) with the arguments from the LLM.
- It saves the results as tool outputs in the conversation.
✅ Step 3: Feed Tool Results Back to the LLM
The conversation (messages) now includes:
- The user’s original question.
- The LLM’s tool calls.
- The actual tool results (like real search or context data).
We send this full conversation back to the LLM in a new chat completion call.
Here, the LLM uses the tool results as real information to:
- Generate a final, factual answer.
- Cite real sources and summarize the retrieved data.
import json
def handle_tool_calls_and_get_final_answer(user_message, tools, client):
"""
Complete tool calling workflow that returns the final answer
"""
messages = [{"role": "user", "content": user_message}]
# Step 1: Initial request - LLM decides which tools to call
response = client.chat.completions.create(
model="gpt-4o",
messages=messages,
tools=tools,
tool_choice="auto"
)
response_message = response.choices[0].message
messages.append(response_message)
# Step 2: Check if tools were called
if response_message.tool_calls:
print("🔧 LLM is calling tools...")
# Execute each tool call
for tool_call in response_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
print(f" 📞 Calling {function_name} with args: {function_args}")
# Execute the appropriate function
if function_name == "get_news":
function_result = get_news(
query=function_args["query"],
country=function_args["country"],
time_period=function_args["time_period"]
)
elif function_name == "get_context":
function_result = get_context(
query=function_args["query"]
)
else:
function_result = f"Unknown function: {function_name}"
# Add tool result to conversation
messages.append({
"tool_call_id": tool_call.id,
"role": "tool",
"name": function_name,
"content": json.dumps(function_result)
})
# Step 3: Get final answer from LLM using tool results
print("🤖 Getting final answer from LLM...")
final_response = client.chat.completions.create(
model="gpt-4o",
messages=messages
)
return final_response.choices[0].message.content
else:
# No tools called, return direct response
return response_message.content
# Usage example
user_query = (
"Can you tell me what's happening about AI development in France this month? "
"And provide some context if needed. Please provide the source of the information with the url."
)
final_answer = handle_tool_calls_and_get_final_answer(user_query, tools, client)
print("\n" + "="*60)
print("🎯 FINAL ANSWER:")
print("="*60)
print(final_answer)
🔧 LLM is calling tools...
 📞 Calling get_news with args: {'query': 'AI development France', 'country': 'FR', 'time_period': 'm'}
 📞 Calling get_context with args: {'query': 'AI development in France'}
🤖 Getting final answer from LLM...

============================================================
🎯 FINAL ANSWER:
============================================================
This month in France, there are significant developments in AI. A notable project involves the creation of the largest AI campus in Europe. This initiative is a joint venture involving several major players: MGX, BPI France, Mistral AI, and Nvidia. The project was announced during the "Choose France" summit. The campus is set to be located in the Île-de-France region and aims to significantly boost AI capabilities and infrastructure in Europe.

Here are some sources for the detailed news coverage:
1. "[MGX, Nvidia, BPI France and Mistral AI will create the largest AI campus in Europe in France](https://www.channelnews.fr/mgx-bpi-france-mistral-ai-et-nvidia-creent-une-coentreprise-pour-construire-le-plus-grand-campus-ia-deurope-147794)" - ChannelNews
2. "[Choose France: A new AI campus in Île-de-France with BPI, MGX, Mistral, and Nvidia](https://www.lemondeinformatique.fr/actualites/lire-choose-france-un-campus-ia-en-ile-de-france-avec-bpi-mgx-mistral-et-nvidia-96873)" - Le Monde Informatique
3. "[MGX, Bpifrance, Nvidia, and Mistral AI to create the largest AI campus in Europe](https://www.usine-digitale.fr/article/mgx-bpifrance-nvidia-et-mistral-ai-vont-creer-en-france-le-plus-grand-campus-ia-d-europe.N2232196)" - L'Usine Digitale

Additionally, in the broader context of AI, the French startup Poolside AI has made significant strides in the development of AI models that improve and automate coding processes. Founded by former GitHub CTO Jason Warner, the company has become a leading player after securing several rounds of funding and forming strategic partnerships, including with Amazon.
📝 Commentary on Results¶
✅ Quality & Relevance
- The final answer is well-structured and directly addresses the user’s question.
- The main headline is clear and supported with multiple sources, which adds credibility.
- The assistant also included background context (e.g., Poolside AI’s activities), demonstrating it used the Wikipedia summary tool to enrich the answer.
✅ Use of Sources
- It cited real URLs, which came from the ddg_news_search function.
- Three sources were clearly listed, and their names (ChannelNews, Le Monde Informatique, L’Usine Digitale) match actual French tech publications, which boosts trust, even though two of them cover essentially the same story (see the deduplication sketch after this list).
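If overlapping coverage is a concern, one option is to deduplicate the search results by domain before handing them to the LLM. The helper below is an illustrative sketch (dedupe_by_domain is not part of the original pipeline) and assumes the result dictionaries produced by ddg_news_search above:
from urllib.parse import urlparse

def dedupe_by_domain(results):
    """Keep only the first article per news domain (illustrative sketch)."""
    seen, unique = set(), []
    for article in results:
        domain = urlparse(article.get("url", "")).netloc
        if domain and domain not in seen:
            seen.add(domain)
            unique.append(article)
    return unique
You could then run dedupe_by_domain over the get_news output before serializing it into the tool message.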
✅ Combining Tools
The LLM automatically chose to combine:
- Recent news via ddg_news_search
- Broader context from Wikipedia
This showcases how function calling in the API lets the LLM decide what’s relevant to provide a complete answer.
✅ No Hallucinations
- Because we supplied real search results to the LLM, the final output matches the real world—not just hallucinated text.
# Usage example
user_query = (
    "Can you tell me what's happening about AI development in France and in Spain this month? "
    "Make a comparison between the two countries. "
    "Do not repeat sources. "
    "And provide some context if needed. Please provide the source of the information with the url."
)
final_answer = handle_tool_calls_and_get_final_answer(user_query, tools, client)
print("\n" + "="*60)
print("🎯 FINAL ANSWER:")
print("="*60)
print(final_answer)
🔧 LLM is calling tools...
 📞 Calling get_news with args: {'query': 'AI development', 'country': 'FR', 'time_period': 'm'}
 📞 Calling get_news with args: {'query': 'AI development', 'country': 'ES', 'time_period': 'm'}
🤖 Getting final answer from LLM...

============================================================
🎯 FINAL ANSWER:
============================================================
Here's a comparison of AI developments in France and Spain this month, based on recent news articles:

**France**
1. **Cykel AI Developments**: Cykel AI PLC has made significant strides by signing an enterprise-level commercial agreement with Transpharmation Ltd. and is planning to raise funds to support its operations and cash reserve strategy. The integration of AI agents like Lucy is becoming crucial for business operations across various sectors [Zonebourse](https://www.zonebourse.com/cours/action/CYKEL-AI-DEVELOPMENT-LIMI-62874329/actualite/Cykel-AI-PLC-decroche-un-accord-commercial-de-niveau-entreprise-avec-Transpharmation-Ltd-50099310/).
2. **Ethical and Sustainable AI Strategy**: The European Council has called for an ethical, sustainable, inclusive, and human-centric strategy for the adoption of AI in science, emphasizing a European-level plan [Consilium](https://www.consilium.europa.eu/fr/press/press-releases/2025/05/23/council-calls-for-an-inclusive-ethical-sustainable-and-human-centric-strategy-for-the-uptake-of-ai-in-science/).

**Spain**
1. **AI in Global Enterprises**: Companies like Meta are reaching significant milestones with AI, achieving a billion monthly users, suggesting a solid embrace and integration of AI technologies for business scalability [EL IMPARCIAL](https://www.msn.com/es-mx/dinero/noticias/meta-ai-alcanza-los-mil-millones-de-usuarios-mensuales/ar-AA1FFQ3e).
2. **Retail Transformation through AI**: The eRetail Congress 2025 in Spain highlighted transformation strategies in retail, focusing on omnichannel environments and AI-driven experiences, marking a strategic shift to enhance customer engagement and integrate technologies in retail [Europa Press](https://www.europapress.es/comunicados/empresas-00908/noticia-comunicado-eretail-congress-2025-estrategias-transformacion-entornos-omnicanales-experiencias-memorables-20250530134351.html).

**Comparison**
France is focusing on embedding AI ethically in science, aligning with EU strategies, while also advancing AI through funding and enterprise agreements like those seen with Cykel AI. In contrast, Spain is leveraging AI in consumer-facing industries such as retail and global tech enterprises like Meta, reflecting a market-driven integration of AI solutions to enhance user experiences and achieve scalability. Both countries are actively pursuing AI development, albeit with a focus on different sectors and strategies.
📝 Commentary on the Answer¶
✅ Comparison Delivered
- The assistant understood the comparative aspect and structured the answer by country, highlighting unique approaches.
✅ Contextual Layer
- Here the LLM did not use the Wikipedia context tool; this can happen when you let the LLM decide which tools to call (see the sketch below for how to force a specific tool).
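If you want to guarantee that the Wikipedia context tool is called, you can force it with an explicit tool_choice instead of "auto". A minimal sketch, reusing the client, tools, and user_query objects defined earlier in this notebook:
# Force the model to call get_context rather than letting it choose freely.
forced_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_query}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_context"}},
)
print(forced_response.choices[0].message.tool_calls)
The rest of the pipeline (executing the forced call and feeding the result back) stays exactly the same.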
✅ Real-Time & Verified
- Like before, the assistant grounded the output in real search results, not hallucinations.
🔍 Key Takeaway
This output showcases the power of tool-based LLM calls:
- LLM alone would have no idea what’s truly happening now in France or Spain.
- LLM + Tools = credible, structured, and tailored answers based on real sources — a huge step beyond hallucination!