I tested this with ChatGPT-5.1 and Gemini 3.0. Both correctly (according to Wikipedia at least) stated that George Olshevsky assigned it to its own genus in 1991.
This is presumably because both models did web searches; their system prompts devote many words to instructions on how to search the web.
Gemini 3.0 might do well even without web searches. The lesson from GPT-4.5 and Gemini 3 seems to be that scaling model size (even with a sparse MoE) lets you capture more long-tail knowledge. Some of Humanity's Last Exam also seems to be explicitly designed to test this kind of obscure long-tail knowledge extraction, and models have been steadily chipping away at it.