Whatever happened to Perplexity? They were all the rage a year or two ago, and now I hear...nothing. Is the product still being used? Making money? Or just overtaken by the base LLMs it was relying on?
They have been receiving a lot of hate on Reddit for a few weeks, since they started mass canceling Pro accounts. What initially seemed to be an attempt at preventing illegitimate accounts (i.e. those using coupons from the grey market, for instance) escalated into waves of random account suspensions just for not having a credit card on file, including legit ones that came bundled with an ISP, a bank account, etc.
It's still there. For Joe Shmoe, in terms of general-purpose, ask-it-a-question LLM use, Perplexity is solidly in the following lineup, as I understand it:
- Perplexity: This one has been promoted on (insert general audience media skewing toward the older set) enough to be a household name still.
- ChatGPT: People in some demographics (see immediately above) are averse to this, on account of negative publicity its parent company has received. (Still very strong popularity and positive sentiment in some demographics, though)
- Claude: Some semi-literates have glommed onto this one, possibly as a result of its more recent success among the developer set.
- Grok: People can be either for or against, based on how they feel about its owning company and its ownership; no more need be said.
- Gemini: Again, if you are in the universe of its owning company (or decidedly not), the draw (or repulsion) can be strong here.
For general LLM use, the above are all about the same. To be clear, this is just me shooting from the hip for how each offering might be viewed. IMO, it's not a bad idea to submit the same input to each and see how they compare, if one is so inclined.
When better standalone LLMs got "web crawling skills" integrated, it pretty much destroyed the need to ever lean on PPLX again. Perplexity is actually not a bad product, but other services like ChatGPT and Claude can do its best thing pretty well, and do other things much better.
One thing I noticed is that whatever harness PPLX wraps around the models, the output is noticeably lower quality in aggregate. I assume some kind of token compression is being applied before your query is passed to a given model, but to my knowledge that's never been proven or confirmed.
Anyways, I get the most value out of coding and PPLX has seemingly pivoted away from that. Probably a good play to not try and compete directly with Claude Code/Codex and find a better niche, but I am not sure who or what their market is. Lovely design, however.