Hey folks - when I am creating Gammas (and I know other folks who have this issue too), the AI often includes what sounds like a great stat or piece of information. But it does not provide a source for the data, and when I fact-check it, the information is rarely correct. When I use Claude, for example, and ask for similar data, it provides data I can fact-check and it links to its sources. I am putting this under bugs because I think it is an issue that needs to be fixed. I can also cross-post under ideas and requests.
Kari H... Yes, the reality is that Gamma is not strictly a research tool. But precisely because a single prompt can generate a detailed presentation, the bar for evidence and provenance gets that much higher. For sourcing, I'd rank Perplexity first and Claude second. Both do a good job of showing their sources, but Perplexity is better about not inferring too much, whereas Claude tends to read too much into a given reference, and that kind of inference is where problems arise.

I share your concerns about provenance and fact/validity checking. Some might argue that content creators should beware, but the content isn't being created by me; it's created by Gamma. The responsibility ultimately still lies with us, but we should be able to expect some integrity in the process, such as sources being provided and enough detail to validate the data or statements.

This is a powerful topic and arguably the biggest challenge facing everyone in the AI space right now, especially as AI's reach extends into intellectual property. For me it's not about AI replacing jobs like McKinsey's; it's about protecting my own job by not publishing unverified assertions and not wasting hours untangling what AI has muddled or misrepresented.
My problem is that it presents content that isn't factually correct. So why present it at all? Many people will take it as factual. Other platforms are doing better, so I think Gamma should be too. I fact-check everything I use in multiple places; it is just frustrating that Gamma is flat-out wrong so many times.

Amanda R., the app above sent me to a page where Deepak had commented, but I cannot upvote, nor will it let me comment; it says I need to be part of the Slack Gammabassadors, which I clearly am. So I offer an upvote here instead.
Kari H... Think about it this way: if enough people discuss conspiracy theories and repeat alternative facts in sufficient volume, AI will essentially pick up whatever we're talking about, unless your prompt clearly guards against this up front. I do that, but I'm not sure whether Gamma operates that way.

When I work in ChatGPT, Claude, or Gemini, I make things very clear: any facts, references, and citations must come with a full APA citation and a link to the source, and anything labeled a "fact" must be cross-checked against reputable sources or journals with strong citation and research-integrity records. Once I started doing that, my output improved. Another thing I do is collate all the references I receive, then force the AI to draw only from that set and forbid it from going outside those researched articles (see the sketch below). This works for me because I'm writing a guide on career progression and do a lot of research for it, but it does not work for sporadic, ad hoc work.

Luckily, I still have the old mindset of not trusting everything completely, so I stay cautious. But now we have an entire generation that might do its research through ChatGPT. Imagine the chaos of bad logic multiplied by uninformed people spreading "facts" that aren't facts, just imagined interpretations. Winter is coming!
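For anyone who wants to script that "closed reference set" approach outside of Gamma, here is roughly what it looks like in code. This is a minimal sketch assuming the OpenAI Python SDK; the model name, the reference entries, and the prompt wording are all placeholder assumptions of mine, not anything Gamma itself exposes:

```python
# Minimal sketch of the "closed reference set" prompting technique described
# above, using the OpenAI Python SDK. The references below are hypothetical
# placeholders standing in for a curated list collected during research.
from openai import OpenAI

REFERENCES = [
    "Author, A. A. (2023). Placeholder article title. Example Journal, 1(1), 1-10. https://example.org/placeholder-1",
    "Author, B. B. (2024). Another placeholder title. Example Journal, 2(1), 11-20. https://example.org/placeholder-2",
]

# System prompt that restricts the model to the curated set and requires
# full APA citations, refusing rather than inferring when coverage is missing.
SYSTEM_PROMPT = (
    "You may only state facts that appear in the reference list below. "
    "Every factual claim must carry a full APA citation and a link to its source. "
    "If the references do not cover a claim, say so instead of inferring.\n\n"
    "References:\n" + "\n".join(f"- {r}" for r in REFERENCES)
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize what is known about career progression frameworks."},
    ],
)
print(response.choices[0].message.content)
```

The same system-prompt pattern should carry over to the Anthropic or Gemini SDKs; the key design choice is telling the model to refuse rather than infer whenever the curated references don't cover a claim.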
Kari H., I don’t know if this helps, but what I’m exploring now is using Gamma within Perplexity Comet, which helps because:

- The Perplexity browser agent does the research and summary, with all sources listed
- Gamma does the visualization, e.g. turning data points into charts and putting sources into a list or nested card, all with one prompt btw
- All of it is accomplished in one view, with no context switching