
Tara Frequently Asked Questions (FAQ)


Q: Do I need to configure anything?

A: No. Tara works out of the box with your existing UXCam setup; no extra tagging or event configuration is needed.

Q: How is Tara different from standard LLMs (like ChatGPT)?

A: Standard LLMs require you to supply extensive context before they can give useful answers. Tara already "knows" your app structure and has watched the user sessions. She uses visual reasoning to understand the user experience (what the user actually sees) rather than relying only on code metadata.

Q: How do Tara credits work?

A: Credits are shared across your organization and reset every month. They can be used both for batch processing of sessions and for asking chat questions. Because the pool is shared, there is no per-app or per-user limit.

Q: Does Tara work with hybrid apps (Flutter/React Native)?

A: Yes. Because Tara relies on visual reasoning (analyzing the frames of the video) rather than just underlying code metadata, she is uniquely suited for frameworks where traditional text-based tracking often fails.

Q: Why do Tara's insights sometimes differ from my dashboard metrics?

A: Dashboards show "Logged Events" (what the code captured). Tara shows "User Reality." If a user clicks a button that fails to trigger an event due to a bug, a dashboard might show no activity, whereas Tara will report that the user experienced friction.
