
Bard Bungles B2B – Bing Better
Last month I looked at how Microsoft’s ChatGPT-powered search engine performed for B2B research. I was pleasantly surprised at how useful it was. Read the full blog article here.
This month I’ve applied the same structured research and evaluation process to Google’s equivalent, Bard.
I can’t sugarcoat this. Bard is currently pretty terrible at B2B research – at least, for the types of questions I tested.
Google is upfront that Bard is still a prototype, and we can expect it to change and evolve at a rapid pace. But as it stands today, I don’t see much use for Bard in B2B research. Bing Chat is far superior.
Here are a few examples to show what I mean.


Top of funnel queries – general information seeking
Bard did OK with the basics here. I noticed two big differences in comparison with Bing Chat:
- Bard doesn’t cite its sources unless you specifically ask for them, whereas Bing Chat often volunteered web links to its reference material. I found this feature of Bing Chat very useful for building confidence in the answers. With Bard, I found myself having to ask follow-on questions like “what are your sources for that answer?” in order to sense-check the results.
- Bard doesn’t suggest follow-on questions. You can type in your own, but there’s no guidance. I missed the suggested follow-on questions that Bing Chat provides.
Bard also had a habit of offering information that didn’t directly answer my question. For example, when I asked “How can PV generation be integrated with industrial unit roofing?” Bard gave a good basic answer, but then rambled off to talk about the benefits of solar power. To me, it’s implicit in my question that I already know quite a bit about solar power, and that I’m looking for something specific. So this extra off-topic content doesn’t add value. Bing Chat was typically much more focussed. Here are the Bard and Bing results side by side for comparison:


Middle of funnel queries – shortlisting suppliers
Again, Bard’s answers weren’t very focussed and didn’t respect the specific context implicit in my question. For example, when I asked Bard “What’s the most comprehensive source of construction industry leads in the UK?” it gave me a slightly useful answer, but then suggested I might want to try going to conferences or networking to get leads!

The Bing response is more concise and actually more valuable:

Things got a bit weird in places here. When I prompted Bard for more details it gave me a list of suppliers, but the links in the list went to the wrong places. For example, a link for the service “Construction Lead Finder” pointed to an article about rogue builders in the Guardian newspaper! This just looked like a bug – fair enough for an experimental service, but it undermined my confidence in Bard as a tool.

Bottom of funnel queries – evaluating specific suppliers
Bard really lost the plot here on several queries. For example, when I asked “Is the Curious Lounge a high quality coworking space?” it hallucinated several plausible-looking but completely imaginary reviews.


Latest score: Bing Chat 1 – Bard 0
It’s an uneven contest at present.
Bard rambles, forgets or ignores important context, and – worst of all – Just Makes Sh*t Up when it doesn’t know the answer. Bing Chat, in comparison, stayed focussed and was honest about its limitations.
I really can’t recommend Bard in its current form as a B2B research tool, and I don’t think we’ll see much uptake of it in the B2B world unless or until Google improves it. For now: stick with Bing Chat for your B2B research.
But there is so much at stake here for the search engine giants that I’m sure we WILL see great improvements in Bard and other tools. We’ll keep testing and reporting on progress!