Adding AI-powered reviews

Sometimes, the best AI feature is a subtle one. In our Friendly Eats app, we wanted to automatically suggest a star rating as a user wrote their full review. This not only reduces the friction of submitting a review, but also has the unexpected benefit of helping users understand how others might perceive their review.

As the user types in the dialog, the star rating automatically updates based on the content of their review. Importantly, the suggested star rating can also be overridden, because we want the reviewer to have the final say!

A video of the AI-powered review feature running

Read on to see what decisions we made to implement this new feature.

Design considerations

Client-side or server-side SDK

We chose to use a client-side SDK, in this case Firebase AI Logic. The primary reason we chose Firebase AI Logic is to protect our Gemini API key.

A server-side SDK would also protect our API keys, but since we are not pulling in massive amounts of data and are just running quick inference, we settled on Firebase AI Logic.

First, Firebase AI Logic lets us use an on-device model when one is available, like Chrome's Gemini Nano. That can eliminate a significant number of calls made with our Gemini API key.

Remote Gemini calls made with AI Logic go through a proxy before reaching the Gemini model endpoints. The proxy stores the API keys, so we don't need to include our Gemini API key in our client-side code. Our app makes requests to the proxy, and the proxy validates the app, attaches the credentials, and forwards the request to the appropriate endpoint (either Vertex AI with a service account or Google Generative AI with an API key).

Another feature of Firebase AI Logic that helps keep our endpoints secure from abuse is automatic integration with Firebase App Check, which ensures that requests to our AI endpoints actually originate from our app. This is done through an attestation provider like reCAPTCHA Enterprise and its invisible attestation mechanism. Attestation may add a few milliseconds before the first request to Gemini, but it is otherwise seamless to the user.
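As a rough sketch, wiring up App Check in the web app might look like the following. The site key string is a placeholder for the key you would get from the reCAPTCHA Enterprise console, and the config object is elided:

```typescript
import { initializeApp } from "firebase/app";
import {
  initializeAppCheck,
  ReCaptchaEnterpriseProvider,
} from "firebase/app-check";

const app = initializeApp({
  // Your web app's Firebase configuration
});

// Attestation runs invisibly in the background; App Check tokens are
// then attached automatically to requests made through Firebase SDKs.
initializeAppCheck(app, {
  provider: new ReCaptchaEnterpriseProvider("YOUR_RECAPTCHA_SITE_KEY"),
  isTokenAutoRefreshEnabled: true,
});
```

This is configuration rather than feature code: once it runs at startup, AI Logic requests carry App Check tokens without further changes.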

Model choice

Picking a model is a tradeoff between cost, speed, and quality. Normally, I start with the largest model and work my way down to determine whether the tradeoffs of moving to a smaller model make sense. Since we are asking for only a small output, a star rating between 1 and 5, it's likely that smaller models will provide the results we are looking for without the cost and slower performance of a larger model. And if the star rating is off, we can fall back on the functionality that lets the reviewer update the rating themselves.

We narrowed our choices to Gemini Nano from the hybrid SDK, with Gemini 2.5 Flash Lite as the fallback option. This lets us default to the built-in AI API for users on supported browsers while falling back to Gemini 2.5 Flash Lite for everyone else. This decision helps keep the cost of providing this feature down: there is no charge for using Gemini Nano through the hybrid SDK, and Gemini 2.5 Flash Lite costs much less than other models while providing an acceptable amount of fidelity.

Context engineering

We made two important decisions to design the most reliable prompt we could.

First, we used structured output so we only receive a JSON value containing the star rating back from the model. This prevents overly verbose responses explaining why a star rating was given, constraining the output to something we can parse in our client code.

The second thing we did to increase accuracy was to give examples in our prompt, guiding the model on what we would consider a high rating versus a low rating. Few-shot examples like this are a great way to tune responses to better fit the culture of your app's userbase.

prompt.prompt
Based on the supplied review, translate the review
to a star rating that is equivalent:        

<example>
<input>The food was wonderful and I really
  enjoyed the amazing experience</input>
<output>5</output>
</example>

<example>
<input>I can say for certain I will never eat here again</input>
<output>1</output>
</example>

Input : ${userInput}

Securing the database

We are using Firestore to store our reviews and restaurants, and Firestore is secured using Firestore security rules. While our original app implementation already needed robust security rules, we did not need to change anything for this feature, since we are not using any new methods to update the reviews. Having said that, it's worth calling out what robust security rules look like for submitting a new review, so I have included them here:

firestore.rules
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // omitting other rules for brevity
    match /restaurants/{restaurantId} {
      // omitting restaurant rules for brevity

      // Ratings:
      //   - Authenticated user can read
      //   - Authenticated user can create or update if userId matches
      //   - Deletes are not allowed (default)
      match /ratings/{ratingId} {
        allow read;
        allow create: if request.auth != null
                      && request.resource.data.userId == request.auth.uid
                      && request.resource.data.text != ""
                      && request.resource.data.rating in [1, 2, 3, 4, 5];
        allow update: if request.auth != null
                      && request.resource.data.userId == request.auth.uid
                      && request.resource.data.text != ""
                      && request.resource.data.rating in [1, 2, 3, 4, 5];
      }
    }
  }
}

Here you can see that we place constraints on the reviews that can be submitted. Since we place a high value on the text, we make sure the text is not empty. And because the rating is AI generated, if the AI happens to return a value of six or even negative one, it cannot be written to the database; this data validation enforces that the value stays within the acceptable range.
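It can also be convenient to mirror these checks on the client so validation errors surface before a write is attempted. The helper below is a hypothetical illustration (the `canSubmitRating` name and `RatingDoc` shape are ours, not part of the app); the security rules remain the real enforcement point:

```typescript
// Shape of a rating document as validated by the rules above.
interface RatingDoc {
  userId: string;
  text: string;
  rating: number;
}

// Client-side pre-check mirroring the Firestore security rules:
// signed-in user, matching userId, non-empty text, rating in 1-5.
function canSubmitRating(doc: RatingDoc, currentUid: string | null): boolean {
  return (
    currentUid !== null &&                   // request.auth != null
    doc.userId === currentUid &&             // userId matches the signed-in user
    doc.text !== "" &&                       // text must not be empty
    [1, 2, 3, 4, 5].includes(doc.rating)     // rating within the allowed range
  );
}
```

Running this before calling Firestore gives users immediate feedback instead of a permission-denied error.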

The code

To wrap up, let's look at how the actual implementation of this review component works without all the framework considerations involved, and highlight the important parts.

Setting up AI Logic

To set up AI Logic, we use the hybrid inference experience, which prefers the browser's built-in model (Gemini Nano) and falls back to the affordable Flash Lite model when the browser model is not available.

main.ts
import {
  getAI,
  GoogleAIBackend,
  getGenerativeModel,
  InferenceMode,
  Schema,
} from 'firebase/ai';
import { initializeApp } from "firebase/app";

// Your web app's Firebase configuration
const firebaseConfig = {
  //... Config here.
};

// Initialize Firebase
const app = initializeApp(firebaseConfig);
const ai = getAI(app, { backend: new GoogleAIBackend() });

// Set up the output schema
const jsonSchema = Schema.object({
  properties: {
    starRating: Schema.integer({
      description: "a value of 1, 2, 3, 4, or 5 to apply to the rating",
      format: "int32"
    }),
  }
});

// Set up the generative model calls.
const genModel = getGenerativeModel(ai, {
  mode: InferenceMode.PREFER_ON_DEVICE,
  inCloudParams: {
    model: 'gemini-2.5-flash-lite',
    generationConfig: {
      responseMimeType: "application/json",
      responseSchema: jsonSchema
    },
  },
  onDeviceParams: {
    promptOptions: {
      responseConstraint: jsonSchema,
    }
  }
});

Here you can see that we initialize Firebase AI Logic much the same way we initialize other Firebase services: we first get the AI Logic singleton through a call to getAI. Once we have the AI singleton, we declare the structured output schema we want by calling Schema.object(). The schema guides the model on what each value corresponds to through descriptions, and optional format options let us further specify the format of the response value. Finally, we call getGenerativeModel(), which lets us specify the options for our requests.

Debouncing input

Since it takes time for the generative model to process input, we generally don't want to overload it with a request on every single key press. Instead, we send a request only after a set amount of time (500 ms or longer) has passed since the last key press; if a new key press arrives first, we cancel the pending request by clearing the timeout.

main.ts
let timeoutIds: number[] = [];

const handleKeyUp = (e: KeyboardEvent) => {
  // Ignore modifier keys, which don't change the input.
  switch (e.key) {
    case "Shift":
    case "Control":
    case "Meta":
    case "Alt":
      return;
    default:
      break;
  }

  timeoutIds.forEach(clearTimeout);
  timeoutIds = [];

  const timeoutId = window.setTimeout(() => {
    // Omitted logic for generating stars here...
  }, 500);

  timeoutIds.push(timeoutId);
};

const reviewBox = document.getElementById('review');
reviewBox?.addEventListener('keyup', handleKeyUp);

Here we ignore the keys that do not change the input, namely the modifier keys, by returning early from handleKeyUp. For any other key, we clear the pending timeouts using the browser's clearTimeout function and empty the list of timeouts we are waiting on. We then start a new timeout that runs inference on the user's input (which we will see in the next section) and add its ID back to the timeouts array.

Generating stars

Now let's look at the code that actually runs inference. This snippet runs inside the timeout callback above, so the KeyboardEvent `e` is in scope.

main.ts
const prompt = `
Based on the supplied review, translate the review
to a star rating that is equivalent:

<example>
  <input>The food was wonderful and I
    really enjoyed the amazing experience</input>
  <output>5</output>
</example>

<example>
  <input>I can say for certain I will never eat here again</input>
  <output>1</output>
</example>

Input : ${(e.target as HTMLInputElement).value}`;

const result = genModel.generateContent(prompt);
result.then(v => {
  // Parse the structured JSON output and update the star picker.
  const stars = JSON.parse(v.response.text()).starRating;
  ratingPicker.updateStars(stars);
}).catch(err => {
  console.error("could not generate star rating %s", err);
});

As you can see, this is a call to generateContent on our previously defined model; we then parse the returned JSON to get our structured output. Once we have that, we call a function to update the star value in our application.
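Since the parsed value ultimately comes from a model, it can also help to defensively coerce it before updating the UI, just as the security rules defend the database. The helper below is a hypothetical sketch (the `clampStarRating` name is ours, not part of the app's code):

```typescript
// Hypothetical helper: coerce whatever the model returned into a safe
// integer in [1, 5], falling back to a default when parsing fails.
function clampStarRating(value: unknown, fallback = 3): number {
  const n = typeof value === "number" ? value : Number(value);
  if (!Number.isFinite(n)) return fallback;       // NaN, undefined, etc.
  return Math.min(5, Math.max(1, Math.round(n))); // clamp into [1, 5]
}
```

Combined with the rules above, this gives defense in depth: the UI never shows an out-of-range value, and the database rejects one regardless.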

Full implementation

While the code snippets above show the feature in plain TypeScript, we thought we would share the implementation as a React component here:

App.tsx
"use client";

import { useEffect, useMemo, useState } from "react";
import { Stars } from "./Stars";

import {
  getAI,
  getGenerativeModel,
  GoogleAIBackend,
  InferenceMode,
  Schema,
} from "firebase/ai";
import { initializeApp } from "firebase/app";

// Initialize the Firebase AI Logic client SDK
const app = initializeApp(
  {
    // Your web app's Firebase configuration
  }
);
const ai = getAI(app, { backend: new GoogleAIBackend() });

// Set up the schema and model
const starSchema = Schema.object({
  properties: {
    starRating: Schema.number({
      description: "a value of 1, 2, 3, 4, or 5 to apply to the rating",
    }),
  },
});
const genModel = getGenerativeModel(ai, {
  mode: InferenceMode.PREFER_ON_DEVICE,
  inCloudParams: {
    model: "gemini-2.5-flash-lite",
    generationConfig: {
      responseMimeType: "application/json",
      responseSchema: starSchema,
    },
  },
  onDeviceParams: {
    promptOptions: {
      responseConstraint: starSchema,
    },
  },
});

// Call the generative model and return the generated star rating
async function getStarRatingFromReview(reviewText: string) {
  const starRatingPrompt = `
Based on the supplied review, translate the review
to a star rating that is equivalent:

<example>
  <input>The food was wonderful and I
  really enjoyed the amazing experience</input>
  <output>5</output>
</example>

<example>
  <input>I can say for certain I will never eat here again</input>
  <output>1</output>
</example>

Input : ${reviewText}
`;
  const { response } = await genModel.generateContent(starRatingPrompt);
  const stars = JSON.parse(response.text()).starRating;
  return stars;
}

// Delay invoking the callback until `delay` ms after the last call.
function debounce(callback: Function, delay: number) {
  let timer: undefined | ReturnType<typeof setTimeout> = undefined;
  return (...args: unknown[]) => {
    clearTimeout(timer);
    timer = setTimeout(() => callback(...args), delay);
  };
}

function App() {
  const [reviewText, setReviewText] = useState("");
  const [stars, setStars] = useState({ score: 1, aiGenerated: false });
  const [generatingRating, setGeneratingRating] = useState(false);

  const generateStarRating = useMemo(
    () =>
      debounce(async (review: string) => {
        setGeneratingRating(true);
        const generatedStars = await getStarRatingFromReview(review);
        setGeneratingRating(false);
        setStars({ score: generatedStars, aiGenerated: true });
      }, 200),
    []
  );

  useEffect(() => {
    if (reviewText.length > 0) {
      generateStarRating(reviewText);
    }
  }, [generateStarRating, reviewText]);

  return (
    <>
      <textarea
        id="review-text"
        className="review-text"
        value={reviewText}
        onChange={(e) => setReviewText(e.target.value)}
        placeholder="Add your review..."
        autoFocus
      ></textarea>
      <Stars
        rating={stars}
        setRating={(score: number) =>
          setStars({ score, aiGenerated: false })}
      />
      {generatingRating && <span>Automatically generating a rating</span>}
    </>
  );
}

export default App;
Stars.tsx
import { Fragment } from "react";

export function Stars({
  rating,
  setRating,
}: {
  rating: { score: number; aiGenerated: boolean };
  setRating: (score: number) => void;
}) {
  return (
    <div className="stars-container">
      <span>Star rating</span>
      <menu className="stars">
        {[...Array(5).keys()].map((_, i) => (
          <Fragment key={5 - i}>
            <input
              type="radio"
              id={`${5 - i}-stars`}
              name="rating"
              value={5 - i}
              checked={5 - rating.score === i}
              onChange={(e) => setRating(parseInt(e.target.value))}
            />
            <label htmlFor={`${5 - i}-stars`} className="star">
              &#9733;
            </label>
          </Fragment>
        ))}
      </menu>
      <span>{rating.score}{rating.aiGenerated && <span></span>}</span>
    </div>
  );
}

Summary

Adding small magical moments to your applications with AI doesn't have to be complicated, and it can greatly delight users and enhance their experience of your application. We can also secure our access to Gemini, not only to control costs, but to make sure our AI endpoints are not susceptible to rampant abuse. Picking smaller (and sometimes even local) models helps limit the costs of adopting AI in our application. Finally, crafting the right prompt guides the AI toward returning the right experiences for our users.

Let us know on X and LinkedIn how you are thinking about adding AI to your existing applications!