Ride the AI Horse: Fundamentals First, Hype Later

By Anna Kourouniotis posted 3 hours ago

  

Picture this: it's Tuesday morning, you've just opened LinkedIn, and there are fourteen posts about a new AI feature that apparently changes everything — again. By the time you finish reading, two more have been published. You close the tab, go back to your PS Query, and wonder if you're already behind.

Here is my honest confession: I am exhausted trying to keep up. And I say that as someone who writes about AI tools, presents on AI, and genuinely thinks this stuff is worth your attention. The pace of change right now is relentless, and pretending otherwise would be doing you a disservice.

So my decision is to keep it basic, and that decision is what this article is about. Whether you are brand new to AI tools or have been dabbling for a few months without quite knowing if you're doing it right, this is for you. And it is especially for those of us in the HEUG community who live and breathe institutional data and need AI to actually work in that context.

So here goes!

YOUR INSTITUTIONAL KNOWLEDGE IS THE STARTING POINT

AI tools like ChatGPT, Copilot, and Claude are powerful — but they do not know your institution. They don't know how your office defines an "enrolled" student, what your DUID structure looks like, whether your headcount logic excludes concurrent enrollment, or that your PS Query pulls from a snapshot table that refreshes at 3:00 AM. You do!

This matters because AI amplifies what you give it. Hand it a vague question without institutional context and you'll get a generic answer that sounds credible but may be entirely wrong for your data environment. Before you type a single prompt, ask yourself: do I understand this data well enough to verify what AI gives me back? If the answer is no — and there is no shame in that — start there. Not with the AI tool. With the data.

PROMPTING WITH HIGHER ED CONTEXT

Once you know your data, prompting becomes your most important skill (despite what some self-proclaimed AI gurus tell you, prompting is not "dead"). Think of a prompt the same way you would think about writing requirements for a Tableau dashboard or a PS Query: it needs an objective, context, constraints, and an expected output. Vague prompts get vague answers. The more institutional specificity you bring, the less the AI has to guess — and guessing is where things go sideways fast.

  "Help me analyze enrollment trends."

  "I am a data analyst at a public university using PeopleSoft Campus Solutions. I have a flat file export with term, academic plan, and headcount for fiscal years 2020–2025. Identify the five undergraduate programs with the sharpest enrollment decline. Summarize in plain language for a non-technical director, and flag any assumptions you make about the data."

That second prompt gives AI the guardrails it needs. And notice what it does at the end — it asks the tool to flag its own assumptions. That habit alone will save you from walking into a meeting with a number that nobody can explain.
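If you find yourself writing that requirements doc over and over, it can help to capture it as a reusable template. Here is a minimal sketch in Python; the function name and the field values are illustrative, not from any specific tool or library:

```python
def build_prompt(role, context, task, constraints, output_format):
    """Assemble a structured prompt: objective, context, constraints, output."""
    return "\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Output: {output_format}",
        # Always ask the tool to surface its own guesses.
        "Flag any assumptions you make about the data.",
    ])

prompt = build_prompt(
    role="Data analyst at a public university using PeopleSoft Campus Solutions",
    context="Flat file with term, academic plan, and headcount for FY2020-2025",
    task="Identify the five undergraduate programs with the sharpest enrollment decline",
    constraints="Exclude concurrent enrollment; use official census counts only",
    output_format="Plain-language summary for a non-technical director",
)
print(prompt)
```

The point is not the code; it is that the "flag your assumptions" line gets baked in, so you never forget to include it.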

THE RISKS THAT ARE SPECIFIC TO OUR WORLD

Higher education data comes with constraints that general-purpose AI tools were not built to account for: FERPA, data governance policies, institutional counting rules, suppression thresholds for small cell sizes, and more. An AI tool will not flag these for you. It will produce something that looks polished and plausible — and leave the compliance burden entirely on you.
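Suppression thresholds are a good example of a rule you have to enforce yourself, because the AI will not. A minimal sketch, assuming a hypothetical "mask counts under 10" policy — the threshold and the data are illustrative, so substitute your institution's actual rule:

```python
SUPPRESSION_THRESHOLD = 10  # illustrative; use your institution's policy

def suppress_small_cells(counts, threshold=SUPPRESSION_THRESHOLD):
    """Replace counts below the threshold with a masked value before reporting."""
    return {group: (n if n >= threshold else "<10")
            for group, n in counts.items()}

headcounts = {"Program A": 412, "Program B": 7, "Program C": 58}
print(suppress_small_cells(headcounts))  # Program B's count of 7 is masked
```

Running AI output through a check like this before it leaves your desk is exactly the kind of human-in-the-loop step the tool cannot do for you.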


Research confirms what many of us already suspect: users regularly fail to catch errors in AI-generated data analyses (Virk & Liu, 2025; Quantum Zeitgeist, 2025). For us, this can mean reporting incorrect enrollment figures to a state agency, making budget decisions on flawed retention data, or surfacing student information that should have been suppressed.


This is not a reason to avoid AI. It is the reason to stay in the loop — to be the person who knows the data, knows the rules, and knows enough to catch what the tool misses. That is not a burden. That is your value.

THREE FUNDAMENTALS EVERY HIGHER ED BEGINNER NEEDS

👉 Start here. Everything else builds on these three.


01  Know your data before you AI it. Understand the source, the population logic, and the counting rules before you prompt anything. AI doesn't know your snapshot schedule, your term structure, or what ‘active’ means in your system. You do — and that context has to come from you.


02  Prompt with institutional specificity. AI has no idea what a DUID is, what your fiscal year structure looks like, or that you exclude auditors from your headcount. Tell it — every single time. Treat your prompt like a requirements doc, not a search bar.


03  Verify like your director is watching. Cross-check every output against a known source of truth — a report you have run before, a count you trust. If the number feels off, it probably is. AI does not know that it is wrong. You have to be the one who notices.
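Fundamental 03 can even be made mechanical. A minimal sketch, assuming you keep a trusted baseline report to compare against — the function, the tolerance, and the numbers are all hypothetical:

```python
def verify_against_baseline(ai_counts, trusted_counts, tolerance=0):
    """Return the terms where AI-generated counts diverge from a trusted report."""
    discrepancies = {}
    for term, trusted in trusted_counts.items():
        ai = ai_counts.get(term)
        # Flag missing terms and any count outside the allowed tolerance.
        if ai is None or abs(ai - trusted) > tolerance:
            discrepancies[term] = {"trusted": trusted, "ai": ai}
    return discrepancies

trusted = {"Fall 2023": 10412, "Fall 2024": 10187}
ai_output = {"Fall 2023": 10412, "Fall 2024": 10902}  # AI overstated Fall 2024
print(verify_against_baseline(ai_output, trusted))
```

A few lines like this will not catch every error, but they turn "the number feels off" into a concrete diff you can explain in a meeting.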

THE BOTTOM LINE

The next time I present on AI, it will be on the foundations, because that is what will still be true no matter what changes between now and then. How to think about a prompt. How to stay skeptical. How to be the human in the loop. Those things do not expire.


For those of us in the HEUG community, the advantage was never going to come from being the first to try a shiny new tool. It was always going to come from understanding our data deeply, asking sharp questions, and knowing enough to tell good output from plausible-sounding nonsense.


The horse is powerful. But you still need to know how to ride.

I LEAVE YOU WITH THIS

I'd like to finish with a short YouTube clip from one of my favorite data authors: Cole Nussbaumer Knaflic talks briefly about how almost every conversation these days includes the word "AI." Her overall take, across the multiple platforms I have seen her on, is that "data does not make decisions, people do."

🙃 Thanks for reading my blog. Now, I would love to know: what do you think about all this? Have you been burned by AI hype before?

---------------------------------------------------

Sources

Virk, Y., & Liu, D. (2025, August 8). Non-programmers assessing AI-generated code: A case study of business users analyzing data. arXiv. https://doi.org/10.48550/arXiv.2508.06484


Quantum Zeitgeist. (2025, August 12). Users frequently fail to detect errors in AI-generated marketing data analyses. Quantum Zeitgeist. https://quantumzeitgeist.com/users-frequently-fail-to-detect-errors-in-ai-generated-marketing-data-analyses/
