Canada has been left out of a recent expansion of Google's artificial intelligence-powered chatbot known as Bard as the big tech giant continues its fight with the federal government over the Online News Act.
As someone who has had to actively contact an AI company and expressly deny the use of digital images on our website, I'm confident there are no boundaries to what they scrape from the Internet. They have no respect and slurp up everything in their path, which unfortunately leads to only one possible outcome - a culturally desensitised dataset. It will become the 'neutral average' of the internet, banal in many ways and biased in others. Don't expect anything that resembles Canadian sentiment to come out of any non-Canadian AI (same problem, different locale for me - UK/Eire).
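For what it's worth, the only blunt instrument most sites have is robots.txt. As far as I know these are the published opt-out user-agent tokens for the big crawlers (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl), though nothing forces a scraper to actually honour the file:

    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /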
And that's before it starts eating its own tail - there are examples of generative image models retrained on their own output, and the results are as interesting as they are amusing and horrific. Keeping the training data clean is an unsolved problem because it's hard to differentiate organic and synthetic sources.
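Purely as an illustration of that feedback loop - a toy sketch, not any real training pipeline, with the sample size and generation count picked only to make the drift visible - fit a Gaussian to some data, sample from the fit, refit on those samples, and repeat. With a finite sample each generation the estimate wanders and the fitted spread typically shrinks toward zero, which is the small-scale version of what those generative-image experiments show:

    # Toy "model eats its own tail" loop: refit a Gaussian on its own samples.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0      # the original, "organic" data distribution
    n = 50                    # samples available per generation

    for gen in range(201):
        synthetic = rng.normal(mu, sigma, n)           # sample from the current model
        mu, sigma = synthetic.mean(), synthetic.std()  # retrain on the synthetic data
        if gen % 50 == 0:
            print(f"generation {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")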
All of this pales into insignificance next to the fact that (as far as I know) no AI can admit when it doesn't know something. It just makes things up from nothing.
I have access to ChatGPT, Bard, etc. I haven't yet found a use for any of it in my work (software engineering) where I trust it enough, and the experiments I've run have confirmed that, for me personally. It's a novelty, a toy; it will evolve. As for the Online News Act, I'm conflicted. I believe in a free and open Internet, which puts me against both restrictive legislation and the likes of Google and Meta, who abuse their position in the online landscape. My instinct is that I'd rather have the Online News Act than Bard, since publishers should own their content - so good luck with that.
This! I see the hype around AI and it's like everyone has lost their mind. You wouldn't accept a statistical study without sampling information (dataset size, origin, selection, filtering, bias, reproducibility, etc.). Why would we not ask the same of LLMs or generative AI? It's as if everyone got so excited about models built on large datasets that they forgot we already had procedures for handling data.