As the 2025 Digital Collegium Annual Conference approaches, we are excited to host Jim Sterne as our Oct. 1 keynote speaker.
Ahead of the conference, Link Journal had the opportunity to interview Sterne about the possibilities he sees in artificial intelligence and the topics he will cover in his keynote, “Next level prompts from tactics to strategy.”
Digital Collegium 2025 Annual Conference
Learn from diverse track sessions, an inspiring keynote and group discussions during the 2025 Annual Conference, happening Sept. 28-Oct. 1 in Grand Rapids, Michigan, and online.

Q&A with Jim Sterne
(Editor’s note: Responses and questions have been mildly edited for conciseness and flow.)

What topics do you plan to cover in your keynote at the Digital Collegium Annual Conference?

I’m going to talk about how generative AI is so different from the computing that we’re used to. We’re used to:
Ask a question, get an answer.
Do a calculation.
Store this in a database.
Analyze this data set.
Generative AI is:
Please make something up.
Tell me a story.
Be creative.
Explain what that means.
That’s why we say that Generative AI is good for almost everything.

For an audience that works in the digital space in higher education, what do you think is exciting about all of these possibilities?

The first thing everybody is excited about is content creation. … But it really shines with strategy – the things that are higher-level marketing and PR and communications that you would do, if you had time. Generative AI can help you with content creation, so you have some more free time.
You would have more free time to use AI to identify prospect segments – whether it’s students or teachers or alumni or grant writers – and create personas in a heartbeat. And then you can turn around and find out, what is the strategic imperative of your administration? What message are they trying to get out? How does that change the message, and how you get that message out, in what cadence?
All of the things that we had been promised we could do with one-to-one marketing, we now have an opportunity to work on.

Putting yourself in the shoes of the higher education marketers who will be attending the Digital Collegium conference, what possibilities would you be most excited about for this technology?

I am, by nature, an enthusiast, so that’s a tough question, because there are so many different directions.
It can help me do my job better, faster, cheaper. I can do more higher-level thinking. It’s as if I suddenly had an assistant who was incredibly intelligent, but didn’t really know how to do my job. And I have to explain my job to it, and I find out what tasks it’s good at. And it turns out it can, with proper treatment, be very good at many of my tasks.
It can never do my job, so I’m not going to worry about that. But if some of this rote work on my desk today could be handed off to someone else, I would have more time to do some strategic thinking, and creative thinking. That’s exciting.

For organizations that are looking at getting started using generative AI, what are the ethical guardrails that you talk with people about?

The biggest issue is bias in the data. These large language models are based on all the text they could get their hands on: the entire internet and a million books. And the bias is already in the data. If you ask for a pilot and a flight attendant, it’s going to give you a male and a female. That’s just the nature of the beast. … Your job is testing whether the output is exhibiting that kind of bias or stereotype. And it’s not just gender. It applies to all data, because data is the result of human activity, and human activity is biased.
Number two is the problem of what some consider to be plagiarism. The latest court case said no, what comes out of a large language model is sufficiently transformed as not to be plagiarism, not to be a copyright issue, but it is the user’s responsibility to make sure that it’s safe for work.
Then there are the ethics of using the tool transparently. Did I use artificial intelligence to help me write my book? Yes. Did it write anything that is published in my book? No. I used it for ideation. I used it to come up with analogies. I used it for descriptions. I used it just to help me brainstorm, but I didn’t take the output and publish it as my own.
So, the ethics come down to being careful that the output is not biased, making sure that you’re not running up against problems of intellectual property, and then being transparent.

Are there common misconceptions about the use of AI that you’d want to address?

The first misconception is historical in nature. When these tools first came out, these companies were using anything you asked and anything you uploaded to train their next model. And everybody looked at that and rightly said, ‘In that case, nobody can touch it.’
But that changed. Now there are multiple ways you can control whether your data is used to train the next model or not. Also, you can just run a large language model completely inside your firewall.
Part two is people believing the output. ‘I asked the computer a question. It gave me an answer. It must be correct.’ No, no it’s not. It’s like asking the question of a complete stranger. Can you trust a complete stranger? You don’t know, so you have to verify all of its output.
If you believe what comes out of it and you’re not aware of the bias, that’s a double fault; that’s a serious problem, and it’s your fault.
Then there’s the misconception that it’s going to take over the world and destroy everybody, or that it’s going to make heaven on Earth, and nobody will have to work again. Those are both erroneous. We’re not going down either one of those paths.

You wrote your first book on AI, “Artificial Intelligence for Marketing,” in 2017, coming from a data science perspective. Looking back, did you anticipate how AI adoption has impacted marketing in the past few years?

I could have if I had been reading scientific papers, but I wasn’t; I was just looking at practical applications in the world. And like everybody else, I was surprised in November 2022 when ChatGPT came out, and we all went, ‘Oh man, this thing can talk. OK, that’s a different animal.’
One of the things that I will make sure people understand is the progression of computers, starting with writing code, which is very specific. You tell the machine exactly what you want it to do.
Then we moved to machine learning, where you’re dealing in probabilities, and you’re getting predictions and likelihoods. Machine learning is the ability for the machine to look at the data and come up with probabilities, suggestions and likelihoods that are incredibly informative.
And then, a little bit further technically from machine learning, comes Generative AI. It’s wildly different. It’s very much not what we’re accustomed to. I’m used to a search engine, a database, an analytics tool. I’m used to asking a question, getting an answer or getting a link. And now it is a co-creator, and that’s different.

What’s your favorite part about doing what you do?

Watching people’s eyes light up. … I refer to myself as a professional explainer of what’s coming. And I pride myself on being able to look over the horizon and say, ‘Look, this is what’s going to happen. You should be paying attention to it today. We can do this much today, and tomorrow, we’re going to be able to do all this other stuff.’
Every book that I write, the last chapter is always a look at the future. And that’s fun. That’s science fiction, and the horizon is getting closer and closer.

Jim Sterne
Keynote speaker, 2025 Digital Collegium Annual Conference