There is a notable difference between AI based on a static LLM and RAG AI, which queries the internet for current information.
No doubt responses from all of them need to be verified - just like you need to verify answers from a Google search. If you are searching for facts or reference sources, you will consistently find that both Perplexity and Consensus are considerably more useful and time-saving than a Google search. You cannot use non-RAG AI at all for this purpose.
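For readers unfamiliar with the term, the retrieve-then-generate loop behind RAG can be sketched in a few lines. This is a minimal, illustrative sketch only: the keyword-overlap retriever, the toy corpus, and the prompt format are my own assumptions, not how Perplexity or any real product actually works. It also illustrates the limitation discussed later in this thread - the answer can only be as good as the pool of documents retrieved.

```python
# Minimal sketch of the RAG idea: instead of answering from a static
# model alone, first retrieve current documents, then feed them to the
# LLM as context. The retriever and prompt format here are illustrative.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt that the LLM would actually receive."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy corpus standing in for retrieved web pages.
corpus = [
    "The X Elite chip emulates x86 apps with some performance loss.",
    "Battery life is the main selling point of these laptops.",
    "Unrelated article about keyboards.",
]
print(build_prompt("How do emulated apps perform on the X Elite chip?", corpus))
```

If the corpus contains only marketing copy, the generated answer will reflect only marketing copy - no prompt can retrieve what was never indexed.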
Not if the topic you are searching for has been plagued by ads. I just asked Perplexity about compatibility issues of the Snapdragon X Elite chip. It reiterated Qualcomm's best-scenario-only claim that "There's reportedly only about a 10% performance loss for emulated apps compared to native ARM apps." It did not mention independent tests have revealed that a very large proportion of emulated apps perform horribly and greatly reduce battery life, the number one selling point of Snapdragon-powered Windows laptops.
As a real human, I know to dig through Google results until I see the other side of the coin, before deciding on a purchase. AIGC services like Perplexity currently do not, and I'm afraid they don't have the business incentive to do so in the future, either.
No, they don't. Not unless they build a business model in which they are paid by users for accurate results, rather than being paid by advertisers. (And a user-supported model will probably lead to a focus on industries - medical, legal, financial services - that are willing to pay a premium, rather than on general consumer use.)
I would suggest asking Perplexity questions like "What are good issues and bad issues with the X Elite Chip?" Or "Show me some articles expressing concerns about the X Elite Chip." Or "What are some reasons I might want to buy the X Elite Chip, and what are reasons to buy a competitor?"
I could coerce Perplexity into questioning the advertised claims, but the model ultimately relies on the same pool of articles to generate a response. It seems the pool does not include independent test results - which are the "facts and reference sources" that really matter, beyond content paid for by Qualcomm and manufacturers.
Consequently, Perplexity can tell me that many apps don't run (expected before product launch; same was true when Apple introduced the M1). What it cannot tell me is that emulated apps that do run perform poorly (unexpected; Apple's Rosetta was much better in this regard).
This is, more or less, why the entire discipline of library science hasn't just been replaced by search engines. Deciding what sources are relevant is hard, and it gets harder the more specific the information you're looking for.
This is the reason why I pay for kagi.com. I regularly use their Quick Answer feature, but always with a grain of salt, as it too hallucinates from time to time, or jumps on the wrong bandwagon if the best-match search results lead in the wrong direction.
I think you are misunderstanding the Perplexity ads:
(1) The ads are separate from the AI response to your question. The actual AI algorithm that responds to your query and gives you a response and links/references is not advertising-related at all.
(2) It is true that Perplexity recently began showing ads next to, but distinct from, the AI response for free accounts. The ads do not appear on paid ("Perplexity Pro") accounts.
That seems like a pretty reasonable solution to me. Surely Perplexity needs to cover its expenses and earn a profit like any other business - how else could they survive? So you have a choice of a free advertising-based account or a paid ad-free account. What's wrong with that?
I'm sorry but you have misunderstood the entirety of what I was talking about.
I never mentioned ad banners, and I generally don't care if these are displayed somewhere on the web page, since it's trivial to hide them once and for all with StopTheMadness.
The problem is, if a topic is plagued by ads (that is, the first few pages of Google search results consist mostly of advertisement and ad-style content), then Perplexity is bound to reiterate some advertised claims in its responses. Thus your claim that
is simply untrue.
I repeat: The algorithm draws responses from the web. If most of the web is ads (or, more precisely, most of the web traffic has gone to advertised and ad-style content), then the algorithm will spit out ads, too. The system is designed, unapologetically, to behave this way.
Don't want to be stuck in a bubble of ads? Google and process the results yourself.
Am I surprised by the favorite son of latter-day capitalism not trying to refute capitalism? Not at all.
OK, I see your point there. But the Library of Congress contains every book ever published - good, bad, and indifferent. Does that mean that the library is full of junk books? No - it simply means you need to figure out how to sort through it for what you desire.
I think prompt engineering can help address much of your concern. Perplexity will even criticize itself if you ask "What are some criticisms of Perplexity.ai?"
There are many ways to make use of information provided by the Library of Congress or something similar. It goes without saying that some ways work better than others, and some would not get you what you want at all. We as human users possess the capability to choose our own ways. An algorithm does not. When an algorithm doesn't work, for whatever reason, it's not going to work. Period. That's why, as @kewms has mentioned, machines are not yet replacing humans in this regard.
This is wishful thinking unless proven true. Even if it is true, the user must not be blamed for not being sufficiently skilled if the relevant skills have not been explained in any official or semi-official handbook. It's akin to blaming limitations of iOS/Android on the user not knowing how to jailbreak.
If you have a specific case you wish to share, it would be helpful to review it.
No software and no technology is perfect. I have found Perplexity.AI and Consensus to both be useful in a very practical sense - both for personal use and for professional use. Perplexity consistently and quickly finds academic citations not found in long-accepted search indexes, which do have big manuals.
No search technology or AI should be accepted unless/until validated by other means. That said, I think if Perplexity or some other RAG AI tool is not in your toolbox, then you may be missing something that could be of big help, whatever your reasons for searching.
My field is closely related to theology and religious studies. I frequently hear faithful individuals explain that "if you do not believe in ___, then you may (in a polite and respectful tone like yours) be missing something about life/nature/the universe/etc." I consider most such arguments to be valid, even though they conflict with each other. And it's always appreciated if ___ actually helps these individuals and/or makes them feel better.
You acknowledged that
and I consider this single observation sufficient on its own to explain why any specific software could be unhelpful to certain individuals.
There is only one thing I would still like to confirm: when you stated (which was the one statement I specifically objected to)
did you take potential time and effort on prompt engineering into consideration?
With Perplexity you will go very far with simple things like "Show me articles which disagree with XYZ" or "What are the reasons in the best and worst reviews for XYZ" or "What are reasons people choose competitors over XYZ"
If you have a real-world case where you think Perplexity does not work well, let's discuss that specific example.
I use (well, experiment with) several different AI platforms. I pay to use these AI platforms (except for one, where I am a beta tester).
IMO, none of them are ready for prime time. They may be great for generating bogus college papers, but that's about it. Asking the AI platforms to summarize a document is fine, but 90% of the time the summary omits key provisions. The AI platforms also do a poor job of spotting inconsistencies or even finding key terms.
Our brains took millennia to evolve. We are not constrained by binary logic, which drives computer programs and AI. Generative AI might evolve to simulate the way proficient people analyze and solve problems, but it will never be the same.
In short, I still rely on DEVONthink and plan on relying on it for quite some time.
In my own experience it typically takes longer to get answers that satisfy me from AIGC than through Google search. A possible reason is that I'm personally never comfortable with out-of-context statements. The context provided by AIGC always feels artificial and suspicious. Maybe this has to do with living in an ultra-low-trust society, though.
As for my specific case about the Snapdragon chip, Perplexity struggles because it apparently refuses to consider the comment sections of sites like Reddit, YouTube, and developer forums (a trove of valuable information if you know how to make use of it), as well as sources in languages other than English. (China has a vibrant hardware review community whose output is frequently referenced in English discussion, for example.) It is also not turning up very new (published in the last week or so) articles.
I use Google search to synthesize my own opinion (e.g. on whether Snapdragon stuff is worth buying) from the search results and their linked content, which could be facts, observations, or opinions. This workflow is impossible without sufficient context. Search engines, for the excess of information they provide, are unparalleled in providing context about any topic.
I don't use search or AIGC for this purpose. My sourced-from-the-web reading materials are either already in DT or to be discovered through RSS feeds and newsletters. To me, the search engine or its potential replacement is strictly for retrieval of information.
No, it did not, even when explicitly instructed to look at these sites.
A useful exercise is to test these tools with material you've already read or a topic you know well. They are, as noted above, BS engines, but it's a lot easier to evaluate their responses when you already know the "correct" answer.
I'm not getting involved in your debate on Perplexity generally (my scientific field has had a reliable search engine for years and I've never struggled to find what I'm looking for, and I use several non-Google search engines that I'm happy with for non-academic stuff), but I did just want to note:
Google, and likely some other AI developers, actively downrank certain types of human content in their training (i.e., blogs, forums) - in fact, Google does it in their search too. This does mean valuable content can be missing. As an example, there are thousands of posts across the internet asking how to do things on macOS because Apple's instructions are either missing or don't work. Heck, in this forum alone there are probably dozens of queries about Apple software (not DT!) because people couldn't find the answer in Apple's support pages. This can make AI (and Google) less valuable than a decent search engine, since they don't actually find the answer.
[I'm ignoring the ethics of all this content-stealing in this post. It's a separate topic.]