How to Create an AI Chat with Gemini in Construct 3


License

This tutorial is licensed under CC BY 4.0. Please refer to the license text if you wish to reuse, share or remix the content contained within this tutorial.

Published on 26 Aug, 2025.

Step 4: Making Your First API Call (A Stateless Request)

Now, let's dive into the core logic inside our Gemini.ts file. To communicate with Gemini, as indicated in the official Google documentation, we need to send a POST request to a specific endpoint.

The process is broken down into four fundamental steps:

  1. Assemble the Request URL: This URL tells Google's servers which AI model to use and authenticates our request with the API key.
  2. Prepare the Payload: The 'payload' is the data we send. For Gemini, this must be a specifically structured JSON object that contains the user's prompt.
  3. Execute the Request: Send the payload using JavaScript's Fetch API.
  4. Parse the Response: Extract the text generated by Gemini from the returned JSON.

Let's translate these steps into a clean, reusable async function called ask:

const modelGemini = "gemini-2.5-flash";

export async function ask(obj: { key: string, question: string, runtime: IRuntime }): Promise<string> {
    const { key, question, runtime } = obj;
    // Build the endpoint URL: it selects the model and authenticates via the API key.
    const url = `https://generativelanguage.googleapis.com/v1/models/${modelGemini}:generateContent?key=${key}`;
    // const url = `https://generativelanguage.googleapis.com/v1beta/models/${modelGemini}:generateContent?key=${key}`;

    // Gemini expects the prompt nested inside a contents → parts structure.
    const payload = {
        contents: [{
            parts: [{
                text: question
            }]
        }]
    };

    try {
        const response = await fetch(url, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(payload)
        });

        if (!response.ok) {
            throw new Error(`Server error: ${response.status}`);
        }

        const data = await response.json();

        // Extract the generated text from the first candidate in the response.
        const answer = data.candidates[0].content.parts[0].text;
        return answer;
    } catch (error) {
        console.error("Error details: ", error);
        return `An error occurred. (${(error as Error).message})`;
    }
}
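For reference, a successful response from the generateContent endpoint contains the generated text nested several levels deep, which is why the code above indexes into `candidates[0].content.parts[0].text`. A sketch of a more defensive extraction, using a simplified response type (only the fields read here; the sample values below are invented):

```typescript
// Simplified shape of a generateContent response: only the fields this
// tutorial's code reads. Real responses contain additional metadata.
type GeminiResponse = {
    candidates?: {
        content?: { parts?: { text?: string }[] };
    }[];
};

// Defensive extraction: returns the first candidate's text, or a fallback
// if the response was blocked or otherwise contains no text.
function extractText(data: GeminiResponse): string {
    return data.candidates?.[0]?.content?.parts?.[0]?.text
        ?? "(no text returned)";
}

const sample: GeminiResponse = {
    candidates: [{ content: { parts: [{ text: "Hello from Gemini!" }] } }]
};
console.log(extractText(sample)); // "Hello from Gemini!"
console.log(extractText({}));     // "(no text returned)"
```

Optional chaining avoids a crash when a request is blocked by safety filters and `candidates` comes back empty; the direct indexing in the function above would throw in that case and fall into the `catch` block instead.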
  • Good tutorial. I'm not sure how memory is normally achieved, but sending the entire chat back to the AI with each prompt doesn't sound ideal: it will use a rapidly growing number of tokens. I'm not sure how that will affect the free tier, but it will cost you a fortune if you have to pay.

    • Hi, thank you for the comment!

      You're right, sending the full history increases token usage. This method is necessary because models like Gemini are stateless and need that history for context. For a real application, it's vital to manage this to control costs. Common solutions include using a "sliding window" of recent messages or summarizing the chat.

      From a game design perspective, I also believe it's better to guide the AI instead of allowing completely free chat. Using structured output helps maintain creative control over the game's narrative, a topic I hope to explore soon.
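      The "sliding window" mentioned above can be sketched as a small helper that trims a Gemini-style `contents` history before each request (a minimal sketch; the `Content` type and `slidingWindow` name are illustrative, not part of the tutorial's code):

      ```typescript
      type Part = { text: string };
      type Content = { role: "user" | "model"; parts: Part[] };

      // Keep only the most recent `maxTurns` user/model exchanges
      // (2 entries per turn), preserving chronological order.
      function slidingWindow(history: Content[], maxTurns: number): Content[] {
          const maxEntries = maxTurns * 2;
          return history.length <= maxEntries
              ? history
              : history.slice(history.length - maxEntries);
      }

      // Example: a six-entry history trimmed to the last two turns.
      const history: Content[] = [
          { role: "user",  parts: [{ text: "Hi" }] },
          { role: "model", parts: [{ text: "Hello!" }] },
          { role: "user",  parts: [{ text: "Who are you?" }] },
          { role: "model", parts: [{ text: "An AI." }] },
          { role: "user",  parts: [{ text: "What did I say first?" }] },
          { role: "model", parts: [{ text: "You said 'Hi'." }] },
      ];
      const trimmed = slidingWindow(history, 2);
      console.log(trimmed.length);           // 4
      console.log(trimmed[0].parts[0].text); // "Who are you?"
      ```

      This caps token usage per request at a constant, at the cost of the model forgetting anything older than the window; summarizing dropped turns is the usual complement.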