
Modern Large Language Models (LLMs) can generate and understand natural language, powering impressive conversational abilities. But by default, they’re limited to “text-in, text-out.” They can’t check stock prices, start a workflow, or send notifications unless you give them the capability to interact with external software.

Function calling (sometimes called tool calling) is how you connect LLMs to the outside world. It lets them interact with your APIs, systems, and actions in a controlled, auditable way, bridging the gap between language understanding and real operations.

What is Function (Tool) Calling in LLMs?

Instead of outputting only plain text, an LLM can request an action by emitting structured JSON specifying:

  • The tool or function to invoke (e.g. get_weather, check_stock_price, create_refund)

  • The arguments to send (parameter names & values)

Your application receives this JSON, validates and executes the action (such as querying a database or calling an API), and can then return the result back to the LLM.

Why Does This Matter?

  • Actionable AI: LLMs become operational agents. They can now actually retrieve market prices, trigger reports, run workflows, or send alerts.

  • Deterministic, safe actions: The model's requests become structured, explicit calls rather than free-form text your code has to interpret.

  • Auditability and control: The host decides which actions are exposed, and how they’re checked.

  • Complex workflows: Multiple tools can be chained, results combined, and custom logic woven in.

Tool Calling, Step by Step

1. Define and Register Tools

Expose each tool/function with:

  • Name and description

  • Argument names, types, and constraints (often via JSON Schema)

Examples:

get_weather(city: string, country_code: string)
check_stock_price(ticker: string, exchange: string)
create_refund(order_id: string, reason: string)
list_orders(customer_id: string)
send_message(channel_id: string, text: string)
schedule_meeting(participants: [string], time: string)

Detailed stock tool JSON example:

{
  "name": "check_stock_price",
  "description": "Get the latest trading price for a given stock ticker symbol from a specified exchange.",
  "parameters": {
    "type": "object",
    "properties": {
      "ticker": {
        "type": "string",
        "description": "The stock ticker symbol, e.g., AAPL, TSLA"
      },
      "exchange": {
        "type": "string",
        "description": "The stock exchange, e.g., NASDAQ, NYSE"
      }
    },
    "required": ["ticker", "exchange"]
  }
}

2. Describe Tools to the LLM

When a new task/conversation starts, send the LLM the list of available tools and their schemas. The LLM now “knows” what it can do and how to call each tool.
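
For example, in an OpenAI-style chat completions request (a sketch; exact field names vary by provider), the tool schemas travel alongside the conversation in a tools array:

{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "What was Apple's closing price today on NASDAQ?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "check_stock_price",
        "description": "Get the latest trading price for a given stock ticker symbol from a specified exchange.",
        "parameters": {
          "type": "object",
          "properties": {
            "ticker": {"type": "string", "description": "The stock ticker symbol, e.g., AAPL, TSLA"},
            "exchange": {"type": "string", "description": "The stock exchange, e.g., NASDAQ, NYSE"}
          },
          "required": ["ticker", "exchange"]
        }
      }
    }
  ]
}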

3. LLM Selects a Tool Call

Example user input:

“What was Apple’s closing price today on NASDAQ?”

LLM emits:

{
  "tool": "check_stock_price",
  "arguments": {
    "ticker": "AAPL",
    "exchange": "NASDAQ"
  }
}

4. Application Validates, Executes, and Secures

  • Validate tool calls (types, permissions, abuse checks); see the dispatch sketch after this list

  • Execute the action (run API, DB query, workflow, etc.)

  • Log each call for auditing & debugging

  • (Optional) Require human sign-off on sensitive actions
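
A minimal sketch of this validate-and-execute step, assuming a simple in-process tool registry (everything here, including the fetchStockPrice helper, is a hypothetical placeholder rather than any particular SDK):

// Hypothetical helper standing in for your real market-data client.
async function fetchStockPrice(ticker: string, exchange: string) {
  return { ticker, exchange, price: 0, currency: "USD" };
}

type ToolCall = { tool: string; arguments: Record<string, unknown> };

// Registry of tool handlers; each validates its own arguments before running.
const registry: Record<string, (args: Record<string, unknown>) => Promise<unknown>> = {
  check_stock_price: async (args) => {
    const { ticker, exchange } = args;
    if (typeof ticker !== "string" || typeof exchange !== "string") {
      throw new Error("check_stock_price: ticker and exchange must be strings");
    }
    return fetchStockPrice(ticker, exchange);
  },
};

async function executeToolCall(call: ToolCall, canUse: (tool: string) => boolean) {
  const handler = registry[call.tool];
  if (!handler) throw new Error(`Unknown tool: ${call.tool}`);
  if (!canUse(call.tool)) throw new Error(`Tool not permitted for this user: ${call.tool}`);

  console.log("tool call", call);                  // audit log every invocation
  const result = await handler(call.arguments);
  console.log("tool result", call.tool, result);
  return result;                                   // handed back to the LLM in step 5
}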

5. Return Results and Repeat

  • Results are returned (structured, often as JSON)

  • LLM decides follow-up: inform user, chain further tools, summarize results, etc. (see the loop sketch below)
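
A rough sketch of that loop, assuming an OpenAI-style API where tool results are appended to the conversation as messages; callModel and executeToolCall are placeholders for your provider's SDK and the dispatcher from step 4:

// Message shapes vary by provider; this is illustrative only.
type Message = { role: string; content: string; tool_call_id?: string };
type ToolRequest = { id: string; tool: string; arguments: Record<string, unknown> };

declare function callModel(messages: Message[]): Promise<{ content: string; tool_calls?: ToolRequest[] }>;
declare function executeToolCall(call: ToolRequest): Promise<unknown>;

async function runConversation(messages: Message[]): Promise<string> {
  while (true) {
    const reply = await callModel(messages);
    if (!reply.tool_calls?.length) return reply.content;  // no tool requested: final answer

    // (most real APIs also require appending the assistant's tool-call message here)
    for (const call of reply.tool_calls) {
      const result = await executeToolCall(call);          // step 4: validate + execute
      messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
    }
    // loop again: the model now sees the results and decides the next step
  }
}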

Tool-Calling in Action: Example

Suppose the user says:

“Show me the price of Tesla (TSLA) on NASDAQ.”

  1. Host sends tool schema (check_stock_price) to LLM

  2. LLM generates:

    {
      "tool": "check_stock_price",
      "arguments": {
        "ticker": "TSLA",
        "exchange": "NASDAQ"
      }
    }
    
  3. Backend executes, fetches price from stock API:

    {"ticker": "TSLA", "price": 698.54, "currency": "USD", "timestamp": "2024-06-13T16:00:00Z"}
    
  4. LLM then produces a friendly report:
    “Tesla (TSLA) closed at $698.54 USD today on NASDAQ.”

Chaining Example

“If Google drops more than 5% in a day, send me an SMS and news summary.”

Tool sequence (sketched in code after the list):

  1. check_stock_price("GOOGL", "NASDAQ")

  2. get_stock_change_percentage("GOOGL", "1d")

  3. (if drop > 5%)
    a. get_stock_news("GOOGL")
    b. send_sms(number="...", text="Google dropped by X%. News: ...")
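
The same sequence as a backend sketch; all four tool functions are hypothetical wrappers that would go through the validate-and-execute path from step 4:

// Hypothetical tool wrappers, declared here only so the sketch is self-contained.
declare function check_stock_price(ticker: string, exchange: string): Promise<{ price: number }>;
declare function get_stock_change_percentage(ticker: string, window: string): Promise<number>;
declare function get_stock_news(ticker: string): Promise<string[]>;
declare function send_sms(args: { number: string; text: string }): Promise<void>;

async function alertOnGoogleDrop(phoneNumber: string) {
  await check_stock_price("GOOGL", "NASDAQ");
  const changePct = await get_stock_change_percentage("GOOGL", "1d");

  if (changePct < -5) {                               // dropped more than 5% in a day
    const news = await get_stock_news("GOOGL");
    await send_sms({
      number: phoneNumber,
      text: `Google dropped by ${Math.abs(changePct).toFixed(1)}%. News: ${news.slice(0, 3).join(" | ")}`,
    });
  }
}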

Creating Tools in Lamatic

Getting started with tools in Lamatic is quick and developer-friendly.

1. Access the Tools Section

  • Go to Lamatic Studio

  • Navigate to Connections > MCP/Tools

  • Click "+ Add Tool"

2. Configure Your Tool

INR to USD tool in Lamatic

  • Name: Descriptive function name (e.g., convertINRtoUSD)

  • Description: Clear statement of what the tool does

  • Parameters: Use JSON Schema to define input types & requirements

  • Code: Implement the function (Node.js/TypeScript or JS snippet)

Example parameter schema for a currency conversion tool:

Currency Converter Params in Lamatic

{
  "amount": {
    "type": "number",
    "required": true,
    "description": "Amount in INR to convert"
  }
}

Example code:

// Fetch the latest INR exchange rates from the external rates API
const response = await fetch('https://api.exchangerate-api.com/v4/latest/INR');
const data = await response.json();

// Convert the INR amount supplied as tool input to USD
const usdRate = data.rates.USD;
const result = input.amount * usdRate;

// Whatever is assigned to `output` is returned as the tool's result
output = result;

Currency Converter Tool Code in Lamatic

Lamatic handles code execution; just supply your logic.

Model Context Protocol (MCP): Standardizing LLM Tool Access

As tool calling has grown, LLM ecosystems have each adopted their own conventions, which makes integrating new tools and reusing them across products a challenge.

The Model Context Protocol (MCP) solves this with a unified, open standard for connecting LLM hosts (the apps) to tool providers (the servers).

MCP at a Glance

  • MCP Host: The environment embedding the LLM and interacting with users (web app, IDE, chat client, etc.). Hosts connect to MCP servers and expose their tools to the LLM.

  • MCP Server: A standalone process providing tools (and sometimes resources/prompts), such as integrations for Git, Jira, billing APIs, custom internal tools, etc.

  • Communication: JSON-RPC 2.0 over transports like STDIO or HTTP/SSE.

  • Server advertises tools and schemas; hosts surface these as function-calling options to LLMs (see the example exchange below).
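
For instance, a host can discover a server's catalog with a JSON-RPC tools/list request; the shapes below roughly follow the MCP spec (consult the spec for the exact fields):

{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

and the server replies with something like:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "check_stock_price",
        "description": "Get the latest trading price for a ticker from a given exchange.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "ticker": {"type": "string"},
            "exchange": {"type": "string"}
          },
          "required": ["ticker", "exchange"]
        }
      }
    ]
  }
}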

Types of MCP Interactions

  • Tools: Callable actions, e.g. “create ticket”, “run SQL query”

  • Resources: Standardized file, URL, or knowledge base access

  • Sampling: Lets a server request LLM completions through the host, with host-side safety controls still enforced

With MCP, you build an integration as a server only once. Any compatible host (editor, chat, agent framework) can use your tools, enabling truly portable, composable, and reusable AI capabilities.

Adding MCP to Lamatic

Lamatic makes integrating MCP seamless. Here's how to add an MCP server to your workflows:

  1. Navigate to MCP/Tools Integration

    • In Lamatic Studio, go to Connections > MCP/Tools

  2. Add MCP Server

    Github MCP Configuration in Lamatic

    • Click the "+ Add MCP Server" button

    • Enter:

      • Server endpoint

      • Type of MCP Server - SSE or HTTP

      • Authentication Header if needed (API keys, secrets)

    • Save the configuration

  3. Discover and Use Tools

    Notion MCP Tools Fetched for Selection

    • Lamatic fetches and displays the tools and resources exposed by the MCP server in the MCP node, or you can use them directly in AI Nodes.

    • You can now use these tools in your workflows, or enable them for LLM/agent backends

Benefits:

  • Instantly connect to a growing ecosystem of MCP-compatible tools and servers

  • Reuse integration logic across teams or projects

  • Mix and match internal and third-party tools in one standardized interface

Best Practices for Tool-Centric Applications

  • Design tools to be granular and single-purpose

  • Add permissions and approval flows as needed

  • Log every invocation for compliance/debugging

  • Test with both real and adversarial prompts for security

Unlocking True Agency With LLMs

Function (tool) calling is the essential pattern for moving from simple conversation to truly actionable, reliable AI. When combined with open standards like MCP, you unlock a future where:

  • LLMs observe live data (market prices, CRM records, news, system state)

  • LLMs take secure, audited actions (alerts, trades, API calls)

  • Workflows become composable and portable across hosts, users, and organizations

The key: harness LLM creativity with precise, secure, application-controlled tool invocation. This is the foundation for trustworthy operational AI agents, ready to handle your most valuable and complex use cases.

👇🏻 To continue learning, attend our upcoming webinar
