Let me start by saying what this article is NOT about.
This is not about the growing presence of artificial intelligence (AI) and machine learning (ML) tools in the fundraising sector, used primarily to identify, segment, and communicate with constituents. Used wisely by those who understand its limitations and its strengths, AI does have applications that can help fundraisers be more efficient, saving time (and therefore, money).
(It’s also not about the fact that nearly three-quarters of the words I’ve typed so far for this article were pre-filled as I typed them, based on words and sentences I had written previously. That’s AI and ML for you: helpful, at times. Well, equal parts annoying and helpful, to be honest.)
This article is about being able to trust the resources we use to provide reliable data to frontline fundraisers. Data we produce that informs their decisions about whom they should ask for support and at what level. And, from a due diligence perspective, whom it is safe to invite to affiliate with a nonprofit.
Trust, but verify
I learned early in my career from my mentors to verify every piece of information that would go into a profile unless it came from a rock-solid source, like a newspaper or magazine, Who’s Who, an assessor’s database, or Standard & Poor’s. Not every blue-chip resource made the ultimate-trust list, though, just because it had a well-known name. (Yes, I’m looking at you, D&B.)
Generally speaking, newspapers and magazines made that trustworthy list because of the prevalence of fact-checking in journalism.
Obviously, things have changed over the last several years with the proliferation of media outlets catering to increasingly segmented audiences, but – as long as you consider the source and confirm stray-seeming threads – big-name periodicals have remained a reasonably safe bet for reliable, trustworthy information.
Enter the dragon
What was one of the first things you did when you signed up for ChatGPT? For prospect research folks, let me guess – you asked it to provide a definition of prospect research, right? Yeah, me too. It did a pretty good job, didn’t it?
But ask it to write a blog post on a topic related to prospect research and you get something so dull that it makes our endlessly interesting profession sound like a month-long visit to the Museum of the History of Nuts, Bolts, and Screws, complete with the full two-hour introductory documentary filmed in 1967. In the original Flemish, with Danish subtitles.
But dull is one thing. Providing unchecked – and seriously flawed – information is entirely another.
Last week, Men’s Journal published an article called “What All Men Should Know About Low Testosterone” that was written entirely by artificial intelligence, without giving readers a heads-up that a machine created the copy. It didn’t take long for people, including experts, to notice.
Bradley Anawalt, MD, chief of medicine at the University of Washington Medical Center, told the science and tech site Futurism that there were 18 factual errors in the short, 659-word article.
Futurism reported that “Some [errors] were flagrantly wrong about basic medical topics, like equating low blood testosterone with hypogonadism, a more expansive medical term. Others claimed sweeping links between diet, testosterone levels, and psychological symptoms that Anawalt says just aren’t supported by data.”
Now, I’m not saying that I’d trust Men’s Journal for medical advice over, say, the Harvard Health Letter or the Mayo Clinic newsletter, but it’s not like it’s some quack clickbaity site. As someone on Mastodon quipped the other day, Men’s Journal is “truly the SunnyD of journalism.” But, you know, SunnyD’s got some fruit juice in there.
We expect this from clickbait. But we’re talking about information from (formerly?) reliable outlets here.
Anyone who has been on the web for a while can gennnnnerally guess when an article is clickbait. If not, it becomes pretty clear once you’ve landed on whatever website they pulled you to.
But you expect Men’s Journal – owned by Arena Group, which also publishes Sports Illustrated and The Street (“Business news that moves markets, award-winning stock analysis, market data and investment ideas”) – to publish reliable (or at least vetted) info. You shouldn’t feel like you have to fact-check it at a basic level, especially when it’s providing health advice on such a serious topic.
But until it was fact-checked and fixed, that Men’s Journal article could have seriously affected the health of anyone who took its advice on testosterone levels.
Finance and technology, too
The same concern about negative impact goes for CNET, the site that’s been a resource on all things technology since it debuted in 1994. CNET is held by Red Ventures, owner of other venerable media titles like ZDNet, Healthline, and Bankrate (“The trusted provider of accurate rates and financial information.”)
In November of last year, CNET quietly started publishing articles generated entirely by AI, which in certain cases isn’t such a bad thing, but it turns out this group of articles was lousy with errors. And the information they did have right…was proven to have been plagiarized. From competitors.
Yeah, it’s not a good look.
I’ve got nothing against AI. In fact, as I said up top, there are areas it’s been applied to in our industry (and beyond, of course) that have definite potential.
The problem is that I’ve used The Street and CNET as sources. If I were researching an athlete, Sports Illustrated would probably be one of my first stops. I probably would (have) put those publications on the “Do Not Need To Verify” list. Now they’re firmly on my “Grain of Salt” list.
Stuff like this undermines our confidence in outlets we’ve previously trusted. Or at least not mistrusted. And every extra verification adds to the time it takes us to do our work.
What to do?
It’s going to get murkier as time goes by, as article generation technology is used more widely.
So first of all, make sure you’re double-checking bylines. Look for a real person (or several) listed as the author and/or editor. After the deep cuts in newsrooms, much of what we read is created by freelancers instead of staff writers – and that’s okay. You can check LinkedIn or look at other articles they’ve written if you’re concerned about their fact-checking and you need to rely on the information they provide. If you’re reviewing a lot of articles, a quick script can surface bylines for you; see the sketch below.
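For the technically inclined, here’s a minimal Python sketch of that byline check, assuming the page follows common author-metadata conventions. The URL, the requests and beautifulsoup4 libraries, and the CSS selectors are all illustrative assumptions, not a universal recipe – plenty of sites use their own markup, so treat a missing byline as a prompt to check manually, not as proof of machine authorship.

```python
# A quick byline check: pull author names from an article page.
# Assumes common conventions (meta name="author", article:author,
# rel="author", or a .byline/.author element). Sites vary widely,
# so "no byline found" means "look closer," nothing more.
import requests
from bs4 import BeautifulSoup

def find_bylines(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    names = set()
    # Standard metadata tags first
    for tag in soup.find_all("meta", attrs={"name": "author"}):
        if tag.get("content"):
            names.add(tag["content"].strip())
    for tag in soup.find_all("meta", attrs={"property": "article:author"}):
        if tag.get("content"):
            names.add(tag["content"].strip())
    # Fall back to visible byline elements in the page body
    for el in soup.select('[rel="author"], .byline, .author'):
        text = el.get_text(" ", strip=True)
        if text:
            names.add(text)
    return sorted(names)

if __name__ == "__main__":
    # Placeholder URL; swap in the article you're vetting.
    for name in find_bylines("https://example.com/some-article"):
        print(name)
```

Even when a script like this does turn up a name, that only tells you a human is credited, not that a human wrote it – which is exactly why the old-school habits below still matter.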
Keep it old-school: if the information is critical, verify it from at least two sources (that weren’t cribbing off each other). And if you’re not sure, a written disclaimer and a quiet word with the information recipient are your best CYA friends. Yesterday’s Washington Post has a good article on the use of AI, chatbots, and things you can do to get reliable information from search engines.
Finally, stay educated. Be aware of what the tech is capable of now and what its limitations are, and plan accordingly in terms of how you gather, synthesize, and present your own work. And if you can, support the local, regional, and national journalism that works hard to vet the facts.