HEUG AI Innovators Network

  • 1.  AI Governance and Regulation at school level (not just teaching!)

    Posted 07-28-2025 04:05 AM

    Hello AI Innovators!

    I'd love to hear from schools that are implementing (or indeed have already implemented) governance, regulation and guidelines for staff - and I don't just mean teaching staff!

    While it's lovely to see colleagues become increasingly confident and efficient with Gen AI, it is quite concerning to see it being used unnecessarily. I can't help but picture our carbon footprint growing exponentially!

    To schools exploring this:

    • Who is leading the conversation? A transversal AI team? Or someone from RSE, perhaps?
    • What clear messages or guidelines have you come up with to help people discern when it makes sense to use AI and when it doesn't?
    • Have you come up with any calculations to drive this with data? (For example, "if it'll take you less than X minutes without AI, do it yourself!" - see the sketch below.)
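
    Purely to illustrate the kind of rule I mean - a minimal Python sketch where every threshold is a made-up placeholder, not a recommendation:

# Toy "should I use AI for this?" rule of thumb. All thresholds are
# illustrative placeholders, not recommendations.

def should_use_ai(manual_minutes: float, runs_per_month: int) -> bool:
    """Suggest AI only when the recurring manual effort clears a threshold."""
    QUICK_TASK_MINUTES = 5       # "if it takes less than X minutes, do it yourself"
    MONTHLY_BUDGET_MINUTES = 30  # only worth automating above this effort per month

    if manual_minutes < QUICK_TASK_MINUTES:
        return False  # quicker to just do it than to prompt, review and correct
    return manual_minutes * runs_per_month >= MONTHLY_BUDGET_MINUTES

print(should_use_ai(3, 10))   # False: a quick task, do it yourself
print(should_use_ai(15, 4))   # True: an hour of manual effort per month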

    And indeed if you're a developer:

    • What conversations are being held to ensure users can always opt in or out of AI functionalities within a solution?
    • Is work being done to inform users of the environmental impact of particular actions/functionalities?

    Looking forward to hearing from you!

    Sarah



    ------------------------------
    Sarah Clarke
    Process Governance & Continuous Improvement
    ESADE
    ------------------------------



  • 2.  RE: AI Governance and Regulation at school level (not just teaching!)

    Posted 07-29-2025 11:57 AM

    Hi Sarah, while I am on the vendor side, I spend a good portion of my time in my role working with schools on AI. I will share some of my observations...

    1. More and more I am seeing centralized AI guidance being handed down to the departments. The guidance today is very light, something I would categorize as broad strokes. We are not seeing these groups dictate a specific model or piece of software, but rather standard procedures that should be followed. We are also seeing the beginnings of an AI HECVAT-type questionnaire, though the questions on these today are somewhat immature and miss a lot of the finer points. Schools get frozen in indecision waiting for guidance they have been told is coming but is still hung up. This is not a fun place to be.
    2. An important factor all schools should understand is that every piece of software they own is charging forward with AI regardless of whether you have approved it. It is being embedded, is usually opt-out, and often you have no control over what's behind the curtain. This is somewhat scary, so be diligent, but also understand the train is moving whether we want it to or not. It is not slowing down for policy to be formed.
    3. In the guidance I have seen or been involved in, there is a lack of understanding around use case diversity. Often the guidance is weighted toward research or personal co-pilot use cases and misses support, self-service and agent use cases for enterprise software. This creates confusion and slows down adoption.
    4. I encourage all our clients to think about problem solving. Don't go into a situation wondering how AI can help. Define a pain point, quantify it, and brainstorm how to address it. Sometimes AI is the right tool, sometimes it is not. Be open-minded. And develop an ROI model for each planned use case (a back-of-envelope sketch follows this list). We do this for the PeopleSoft AI Agent work we do. You would be amazed at how much ROI there can be if you saved everyone just 5-10 clicks per month.
    5. In terms of developing, our approach is to make it transparent when AI is used and what data will be sourced in the use case. For example, we do this with our AI Job Posting Agent. It will tell the user, "Hey, I am about to send jobcode, position title, location, hours per week, salary range, etc. to AI in order for me to write your job posting. Do you want to proceed?" This is a good pattern to observe and it builds trust.
    6. Re: Environment, I have not seen wide attention on this, but this one is tricky. For example, if the AI Agent saves 20 round trips to the web application and saves the user 30 mins, what is the net effect? It is hard to quantify. While AI does take power to run, it can save power in other ways. I have not seen a good study on the net effect here to date. Another interesting thought here is that energy use and data privacy can be competing priorities. A dedicated model is the most secure, but then you are not sharing hardware, so it is less conscious of resources. That is an interesting tie-break decision (a sketch of how you might frame the comparison also follows this list).
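
    To make the ROI point (4) concrete, here is a back-of-envelope sketch in Python. The monthly_roi helper and every figure in it (clicks saved, seconds per click, hourly cost, user count, AI platform cost) are illustrative placeholders, not numbers or code from our PeopleSoft work:

# Back-of-envelope ROI for a single AI use case. All inputs are illustrative.

def monthly_roi(clicks_saved_per_user: int, users: int,
                seconds_per_click: float, hourly_cost: float,
                monthly_ai_cost: float) -> float:
    """Estimated net monthly savings (USD) for one AI use case."""
    hours_saved = clicks_saved_per_user * users * seconds_per_click / 3600
    return hours_saved * hourly_cost - monthly_ai_cost

# 8 clicks saved per user per month, 5,000 users, 20 s per click
# (including page loads), $40/hour loaded cost, $500/month AI platform cost.
print(f"${monthly_roi(8, 5000, 20, 40.0, 500.0):,.0f} net per month")  # ~$8,389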
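
    And for the environmental point (6), the comparison can at least be framed even if good inputs don't exist yet. Every number below is a guess; the value is in the structure of the calculation, not the result:

# Framing the "net effect" question. The net_energy_wh helper is hypothetical
# and every figure is a guess; the point is the shape of the comparison.

def net_energy_wh(ai_queries: int, wh_per_query: float,
                  web_trips_avoided: int, wh_per_trip: float,
                  user_minutes_saved: float, device_watts: float) -> float:
    """Positive = AI costs more energy than it saves; negative = net saving."""
    ai_cost = ai_queries * wh_per_query
    avoided = (web_trips_avoided * wh_per_trip            # server round trips
               + user_minutes_saved / 60 * device_watts)  # device running time
    return ai_cost - avoided

# One agent interaction replacing 20 round trips and saving the user 30 mins:
print(net_energy_wh(3, 0.5, 20, 0.1, 30, 30))  # -15.5, i.e. a net saving (Wh)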

    Moving forward, I would prioritize the following:

    • Use AI where you can choose the model and host; this gives you control and understanding. 
    • Not all models are the same. A cheaper software price, or AI bundled in for free, is often a sign of smaller models. While those can sometimes be fine-tuned to do well, they can also perform more poorly at maintaining guardrails. As always, nothing is ever really free.
    • Be conscious of your data, where it is going, and what the risks are. For example, OpenAI just recently said all your chats are discoverable in legal cases. Maybe choosing a model which is not owned by the host is a good idea?
    • Understand what really is a PII data risk and what is not. Often the use of GenAI with enterprise data looks like a concern, but it really is not. For example, if I send GenAI your leave balance but no identifying information about you, there really is no or low risk (see the sketch after this list). Of course, doing this right depends on the AI middleware.
    • Further, I have had schools concerned over things like sending transcripts to AI. I would ask...are the students sending their transcripts to AI already, just in a less controlled way? Isn't providing controls, enhanced security, authentication and anonymization a better approach? If you resist providing the functionality, your students will take less safe paths in order to get value earlier.
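
    As a minimal sketch of the "send the data point, not the identity" idea mentioned above - the strip_pii helper and all field names are hypothetical, and real AI middleware would do much more:

# Keep only explicitly allowed, non-identifying fields before anything
# leaves your systems. Field names are hypothetical.

ALLOWED_FIELDS = {"leave_balance", "leave_type", "accrual_rate"}  # no identifiers

def strip_pii(record: dict) -> dict:
    """Return a copy of the record containing only allowed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

employee = {"emplid": "000123", "name": "J. Doe",
            "leave_balance": 12.5, "leave_type": "Vacation"}
print(strip_pii(employee))  # {'leave_balance': 12.5, 'leave_type': 'Vacation'}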

    I hope this is helpful. Good luck on your journey!



    ------------------------------
    Andrew Bediz
    Managing Director AI & UX
    Gideon Taylor
    ------------------------------



  • 3.  RE: AI Governance and Regulation at school level (not just teaching!)

    Posted 07-30-2025 11:17 AM

    Yes! That was super helpful and interesting! Thanks so much for taking the time to write all that, Andrew. It really opens the way for many additional threads! I'll resist the temptation for now, but I'll be back :)

    For the purposes of this thread I'd like to pick up on points 5 and 6 in particular. Btw, in point 2 you say "It is being embedded, is usually opt-out" - do you mean that we usually will or will not be able to opt out? My interpretation is that we will not be able to.

    • Point 5: "In terms of developing, our approach is to make it transparent when AI is used and what data will be sourced in the use case. For example, we do this with our AI Job Posting Agent. It will tell the user, 'Hey, I am about to send jobcode, position title, location, hours per week, salary range, etc. to AI in order for me to write your job posting. Do you want to proceed?' This is a good pattern to observe and it builds trust."

      This makes a lot of sense, and I think Legal and Compliance teams will insist on it anyway, so best to get off on the right foot ;) Is flexibility being built in, too? In this same example, could I ask for everything except the salary range to be sent?
    • Point 6: "Re: Environment, I have not seen wide attention on this, but this one is tricky. For example, if the AI Agent saves 20 round trips to the web application and saves the user 30 mins, what is the net effect? It is hard to quantify. While AI does take power to run, it can save power in other ways. I have not seen a good study on the net effect here to date. Another interesting thought here is that energy use and data privacy can be competing priorities. A dedicated model is the most secure, but then you are not sharing hardware, so it is less conscious of resources. That is an interesting tie-break decision."

      This is a very interesting thought. I think the answer in our case would hands down be that security is the priority, and I'm pretty sure it will be for most schools and organisations in general. So this means that AI is all the more detrimental to the environment, and organisations will have to step up their efforts even further to counterbalance their carbon emissions.

    Would love to hear from other concerned schools on this and indeed have other developers chime in (bonus points for positive news ;) ).

    Sarah



    ------------------------------
    Sarah Clarke
    Process Governance & Continuous Improvement
    ESADE
    ------------------------------



  • 4.  RE: AI Governance and Regulation at school level (not just teaching!)

    Posted 08-08-2025 12:37 PM

    Hi Sarah, 

    You asked: "Btw, in point 2 you say 'It is being embedded, is usually opt-out' - do you mean that we usually will or will not be able to opt out? My interpretation is that we will not be able to."

    I am speculating here, but history tells me that software vendors know opt-out beats opt-in handily for adoption, so expect AI features to be on by default with, at best, a setting to turn them off. You also see cases where vendors don't even allow opt-out. For example, a transcript processor that uses AI won't let you opt out, else the product won't work. By using the product, they take that as your agreement. However, you had no knowledge of, or say in, the fact that the AI model used is shared between multiple customers! It takes a lot of research to catch these things, so it is daunting for customers.

    And: "In this same example, could I ask for everything except the salary range to be sent?"

    Love your idea. If you don't mind, I am going to use it in upcoming product roadmaps! I have not seen that pattern, but why not have the default data set be visible and allow each user to refine that list before sending? Great idea!
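
    Purely as a sketch of what that could look like - this is not our shipping Job Posting Agent code, and the confirm_fields helper, field names and prompt wording are all hypothetical:

# Consent prompt where the user can trim the field list before anything
# is sent (Sarah's "everything except salary range" idea). Illustrative only.

DEFAULT_FIELDS = ["jobcode", "position_title", "location",
                  "hours_per_week", "salary_range"]

def confirm_fields(fields: list[str]) -> list[str]:
    """Show the default data set and let the user exclude fields before sending."""
    print("I am about to send these fields to AI to draft your job posting:")
    for field in fields:
        print(f"  - {field}")
    excluded = input("Fields to exclude (comma-separated, or blank for none): ")
    drop = {f.strip() for f in excluded.split(",") if f.strip()}
    return [f for f in fields if f not in drop]

# Typing "salary_range" at the prompt sends everything except the salary range.
approved = confirm_fields(DEFAULT_FIELDS)
print("Sending:", approved)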



    ------------------------------
    Andrew Bediz
    Practice Lead AI & UX
    Gideon Taylor
    ------------------------------
