Oh man let me add onto that!
4. If you read about a new Gemini model, you might want to use it - but are you using @google/genai, @google/generative-ai (wow, finally deprecated), or @google-ai/generativelanguage? Silly mistake, but when nano banana dropped, it was highly confusing that image gen was available through only one of these.
5. Gemini supports video! But that video first has to be uploaded to "Google GenAI Drive", which then splices it into 1 FPS images and feeds them to the LLM. No option to improve the FPS, so if you want anything properly done, you'll have to splice it yourself and upload it to generativelanguage.googleapis.com, which is only accessible through their GenAI SDK. Don't ask which one, I'm still not sure.
6. Nice, it works. Let's try using live video. Open the docs: it's mentioned a bunch of times, but there's zero documentation on how to actually do it, only suggestions to use 3rd-party services. When you finally find it in the docs, it says "To see an example of how to use the Live API in a streaming audio and video format, run the "Live API - Get Started" file in the cookbooks repository". Oh well, time to read badly written Python.
7. How about we try generating a video? Open up AI Studio and you see only Veo 2 available among the video models. But open up the "Build" section, and I can have Gemini 3 build me a video generation tool that uses Veo 3 via the API just by clicking on the example. But wait, why can't we use Veo 3 in AI Studio with the same API key?
8. Every Veo 3 extended video has absolutely garbled sound, and there is nothing you can do about it. Or maybe there is, but by this point I'm out of willpower to chase down edgy edge cases in their docs.
9. Let's just mention one semi-related thing: some things in the Cloud come with default policies that are absurdly limiting, which means you have to create a resource/account and update the policies related to whatever you want to do, only for the console to tell you these are _old policies_ and you should edit the new ones instead - which are impossible to properly find.
10. Now that we've set up our accounts, our AI tooling, and our permissions, we write the code, which takes less time than everything else on this list. Now you want to test it on Android? Well, you can:
- A. Test it with your own account by manually signing in to emulators, local or cloud, which means passing 2FA every time if you want to automate this, constantly risking your account's security (or a ban).
- B. Create a Google account just for testing, add it to Licensed Testers on the Play Store, invite it to internal testing, and wait 24-48 hours to be able to use it. Then, if you try to automate testing, struggle with mocking the whole Google Account login process, which uses non-deterministic logic to show a random pop-up every time. Then do the same thing for the purchase process, ending up with a giant script that clicks through the options.
11. Congratulations, you made it this far and can deploy your app to Beta. Now find 12 testers to actively use your app for free, continuously, for 14 days to prove it's not a bad app.
At this point, Google is actively preventing you from shipping at every step, causing more and more issues the deeper down the stack you go.
12. Release your first version.
13. Get your whole Google account banned.
> 4. If you read about a new Gemini model, you might want to use it - but are you using @google/genai, @google/generative-ai (wow, finally deprecated), or @google-ai/generativelanguage? Silly mistake, but when nano banana dropped, it was highly confusing that image gen was available through only one of these.
Yeah, I hear you, and I'm open to suggestions on making this clearer, but it is @google/genai going forward. Switching packages sucks.
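For anyone finding this later, the going-forward package looks roughly like this - a minimal sketch, with the model name as a placeholder:

```ts
// Minimal text generation with the unified @google/genai SDK.
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-2.5-flash", // placeholder: use whichever model you read about
  contents: "Explain what an SDK is in one sentence.",
});
console.log(response.text);
```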
> Gemini supports video! But that video first has to be uploaded to "Google GenAI Drive", which then splices it into 1 FPS images and feeds them to the LLM. No option to improve the FPS, so if you want anything properly done, you'll have to splice it yourself and upload it to generativelanguage.googleapis.com, which is only accessible through their GenAI SDK. Don't ask which one, I'm still not sure.
We have some work ongoing (should launch in the next 3-4 weeks) which will let you reference files (video included) from links directly, so you don't need to upload to the File API. We also support custom FPS: https://ai.google.dev/gemini-api/docs/video-understanding#cu...
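In the meantime, the upload-then-reference flow with a custom FPS looks roughly like this sketch (field names per the current @google/genai SDK; treat the exact shapes as an assumption if you're on an older version):

```ts
// Upload a video through the File API, then reference it with a custom sampling rate.
import { GoogleGenAI, createUserContent } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Upload the file; it may take a moment to become ACTIVE before it is usable.
const video = await ai.files.upload({
  file: "clip.mp4",
  config: { mimeType: "video/mp4" },
});

const response = await ai.models.generateContent({
  model: "gemini-2.5-flash", // placeholder model name
  contents: createUserContent([
    {
      fileData: { fileUri: video.uri, mimeType: video.mimeType },
      videoMetadata: { fps: 5 }, // sample at 5 FPS instead of the default 1
    },
    "Describe what happens in this clip.",
  ]),
});
console.log(response.text);
```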
> 6. Nice, it works. Let's try using live video. Open the docs: it's mentioned a bunch of times, but there's zero documentation on how to actually do it, only suggestions to use 3rd-party services. When you finally find it in the docs, it says "To see an example of how to use the Live API in a streaming audio and video format, run the "Live API - Get Started" file in the cookbooks repository". Oh well, time to read badly written Python.
Just pinged the team; we will get a live video example added here: https://ai.google.dev/gemini-api/docs/live?example=mic-strea... Should have it live Monday. Not sure why that isn't there, sorry for the miss!
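Until that doc lands, here is a rough sketch of the shape of it with the current SDK - the frame-capture helper is hypothetical, and the exact message/callback shapes may differ slightly between SDK versions:

```ts
// Sketch: open a Live API session and stream JPEG frames at ~1 FPS.
import { GoogleGenAI, Modality } from "@google/genai";

// Hypothetical helper: capture a frame from your source (canvas, ffmpeg, etc.)
// and return it as a base64-encoded JPEG string.
declare function grabJpegFrame(): Promise<string>;

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const session = await ai.live.connect({
  model: "gemini-2.0-flash-live-001", // placeholder: any live-capable model
  config: { responseModalities: [Modality.TEXT] },
  callbacks: {
    onmessage: (msg) => {
      // Print whatever text the model streams back.
      for (const part of msg.serverContent?.modelTurn?.parts ?? []) {
        if (part.text) console.log(part.text);
      }
    },
    onerror: (e) => console.error("live session error:", e.message),
  },
});

// Push one frame per second alongside whatever audio you are sending.
setInterval(async () => {
  session.sendRealtimeInput({
    media: { data: await grabJpegFrame(), mimeType: "image/jpeg" },
  });
}, 1000);
```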
> 7. How about we try generating a video? Open up AI Studio and you see only Veo 2 available among the video models. But open up the "Build" section, and I can have Gemini 3 build me a video generation tool that uses Veo 3 via the API just by clicking on the example. But wait, why can't we use Veo 3 in AI Studio with the same API key?
We are working on adding Veo 3.1 to the dropdown; I think it is being tested by QA right now. I pinged the team for an ETA, but it should be rolling out ASAP. Sorry for the confusing experience, hoping this is fixed by Monday EOD!
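For anyone who wants the API route today, generation is a long-running operation you poll - roughly this sketch, with the model name as a placeholder for whichever Veo version your key has access to:

```ts
// Sketch: generate a video with Veo via the API and poll the operation.
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

let operation = await ai.models.generateVideos({
  model: "veo-3.1-generate-preview", // placeholder Veo model name
  prompt: "A drone shot over a foggy forest at dawn",
});

// Video generation takes a while; poll until the operation completes.
while (!operation.done) {
  await new Promise((resolve) => setTimeout(resolve, 10_000));
  operation = await ai.operations.getVideosOperation({ operation });
}

const video = operation.response?.generatedVideos?.[0]?.video;
if (video) {
  await ai.files.download({ file: video, downloadPath: "out.mp4" });
}
```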
> 8. Every Veo 3 extended video has absolutely garbled sound, and there is nothing you can do about it. Or maybe there is, but by this point I'm out of willpower to chase down edgy edge cases in their docs.
Checking on this; I haven't used extend a lot, but I will see if there is something missing we can clarify.
On some of the later points, I don't have enough domain expertise to weigh in, but I will forward this to folks on the Android / Play side to see what we can do to streamline things!
Thank you for taking the time to write up this feedback : ) hoping we can make the product better based on this.
Hi there! I am the PM for Veo on the Gemini API. I wanted to check with you on point 8 - getting garbled sound when extending a video. Veo 3.1 is limited to the last 24 frames (1s of video) for the extension feature, so sometimes dialog and audio lack continuity. We are working on this limitation. If you are experiencing a different issue altogether, would you be able to share the prompt so I can debug on my end? Thank you!
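For reference, extension re-uses the tail of an already generated clip as the new input - roughly this sketch with the current SDK (treat the `video` input parameter as an assumption if your version differs):

```ts
// Sketch: extend a previously generated Veo clip. Only the last 24 frames
// (~1s) of the source carry over, which is why audio continuity can break.
import { GoogleGenAI, type Video } from "@google/genai";

// Assumed to be the `video` field from a finished generateVideos operation.
declare const firstClip: Video;

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

let operation = await ai.models.generateVideos({
  model: "veo-3.1-generate-preview", // placeholder Veo model name
  prompt: "The scene continues as the dialog carries on",
  video: firstClip, // assumption: extension input parameter
});
while (!operation.done) {
  await new Promise((resolve) => setTimeout(resolve, 10_000));
  operation = await ai.operations.getVideosOperation({ operation });
}
```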