
Innovation & Regulation: How consent works when using Agentic AI in the context of Open Banking

  • Writer: Dermot Butterfield
  • Sep 22
  • 7 min read

(This piece is adapted from Dermot Butterfield’s address at the M2 AI Summit in Auckland on 30 April 2025.)


We’re discussing consent in the data world and how it sits within the AI space in particular.


The first thing to be aware of is the broader context of new privacy legislation in New Zealand. We now have the Customer and Product Data (CPD) Act and the new Information Privacy Principle 3A (IPP3A), which comes into force on 1 May 2026. Both aim to give individuals greater ownership of their personal data. The idea is that customers can take their data out of a business because it is theirs. They also have a right to be forgotten, if they wish.


New Zealand has adopted a standard similar to those in Australia and Europe - it will apply to banks first, and later extend to other sectors such as telecommunications, energy, and insurance.


The obligations


The implications are massive. Historically, data was held by institutions and they used it as they wished.


But going forward, customers will have the right to interact with and direct the use of their data. And the complication is that once you start feeding that into AI and a customer comes along later and says, “I'd like to be forgotten,” how are you going to do that?


The type of information made available under these regimes starts with the simplest pieces - name, address, telephone number, contact details, the obvious things we already hold - but it goes much further: accounts, balances, transactions, payment details, scheduled payments, direct debits, insurance products, the prices of those products, the rates customers are getting. It is a colossal amount of information, with huge potential to power the business functions we will build in the future.


What the legislation actually says is that whether you receive information from a customer directly or indirectly, you must inform them that you hold it.


Think about it: TV streaming apps, for instance, collect a lot of information on viewing habits and are able to identify viewers on the basis of that.


If you fall under this legislation — and most organisations will — you must disclose:


  • why you are collecting the data,

  • who will receive it and interact with it,

  • their contact details, and

  • the legal authority under which the processing occurs.
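

To make that concrete, here is a minimal sketch in Python of the disclosure record an organisation might keep per collection event. The field names and the example values are hypothetical; the Act does not prescribe a schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DisclosureRecord:
    """One record per data-collection event, capturing what the
    customer must be told under the legislation."""
    customer_id: str
    purpose: str                 # why the data is being collected
    recipients: list[str]        # who will receive and interact with it
    contact_details: str         # how the customer can reach those recipients
    legal_authority: str         # the authority under which processing occurs
    collected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: recording a disclosure before sharing data with an AI partner.
record = DisclosureRecord(
    customer_id="cust-0042",
    purpose="Pre-populate an energy-switching application",
    recipients=["ExampleEnergyCo", "ExampleAIPartner"],
    contact_details="privacy@example-energy.example",
    legal_authority="Customer and Product Data Act / IPP3A",
)
```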


From an AI-partnership perspective: if you are sharing data with another organisation to deliver value to your customer, that partner must also inform the customer that they are receiving the data.


Now consider data brokers, credit bureaus, and other intermediaries. Your customer will now be informed that you are sharing their data. In short: if personal data is being used — whether yours or that of your business — you must inform the individual directly.


This is where consent enters the data ecosystem. Consent is not optional; it is embedded in regulation. There are now clear rules around who has access and how that access is exercised.


Principles of consent


Consent, importantly, rests on five principles:


  1. Explicit — you must request it from the customer. They must affirmatively say “yes” or “no.”

  2. Time-bound — consent is not indefinite; data cannot be stored forever.

  3. Specific — consent must relate to a defined purpose, not a blanket authorisation to collect other data along the way.

  4. Purpose-limited — data must be collected to deliver a defined benefit to the customer, not simply because you want to accumulate it.

  5. Non-transferable — the fact that someone gave you their data does not mean you can share it with others without further consent.
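

As a sketch of how those five principles might translate into code, here is a hypothetical consent check in Python. None of these names come from the legislation, and "specific" and "purpose-limited" collapse into a single purpose match here:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class Consent:
    customer_id: str
    purpose: str           # specific: one defined purpose per consent
    grantee: str           # non-transferable: a named recipient only
    granted: bool          # explicit: an affirmative yes or no, never implied
    expires_at: datetime   # time-bound: consent is never indefinite

def may_use(consent: Consent, requester: str, purpose: str) -> bool:
    """True only if every principle is satisfied for this exact use."""
    now = datetime.now(timezone.utc)
    return (consent.granted                      # explicit
            and now < consent.expires_at         # time-bound
            and purpose == consent.purpose       # specific / purpose-limited
            and requester == consent.grantee)    # non-transferable

c = Consent("cust-0042", "energy-switch-quote", "ExampleEnergyCo", True,
            datetime.now(timezone.utc) + timedelta(days=90))
print(may_use(c, "ExampleEnergyCo", "energy-switch-quote"))  # True
print(may_use(c, "SomeDataBroker", "energy-switch-quote"))   # False: not transferable
```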


Now back to the complications of building consent into AI.


AI is not necessarily a thought-based system. It's a mirror: it reflects what we put into it. I like to say that it doesn't think, it reflects what we teach it, and we can see this in the many legal cases OpenAI faces from authors who found their work turning up in its output.


Now imagine your bank transactions being fed into a similar model. What happens when someone can query “people like you” based on those inputs?


This is why, as we feed more financial or health data into AI systems, we must protect customers’ information. Because, just like television viewership data, financial behaviour can be used to predict race, gender, sexual preference, and political leanings — all of which are protected attributes. Netflix, from your viewing history, can infer far more than you might expect.


As privacy researchers often remind us, your postcode, age, and gender alone can uniquely identify you in 87% of cases; no name or other explicit personal details required.
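

You can test this on your own data. Here is a small Python sketch that measures how many records are unique on just those three quasi-identifiers, with a toy dataset for illustration:

```python
from collections import Counter

def uniqueness_rate(records: list[dict], quasi_ids: tuple[str, ...]) -> float:
    """Fraction of records whose quasi-identifier combination is unique,
    i.e. re-identifiable even with names and account numbers stripped."""
    combos = Counter(tuple(r[k] for k in quasi_ids) for r in records)
    unique = sum(1 for r in records
                 if combos[tuple(r[k] for k in quasi_ids)] == 1)
    return unique / len(records)

# Toy dataset: no names, no account numbers -- yet most rows are unique.
people = [
    {"postcode": "6011", "age": 34, "gender": "F"},
    {"postcode": "6011", "age": 34, "gender": "F"},
    {"postcode": "6012", "age": 29, "gender": "M"},
    {"postcode": "1010", "age": 52, "gender": "F"},
    {"postcode": "8041", "age": 41, "gender": "M"},
]
print(uniqueness_rate(people, ("postcode", "age", "gender")))  # 0.6
```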


Responsible AI


So what does AI and consent mean in practice?


Financial models require data to generate outcomes. But once an AI model is trained, there is no “undo.” AI does not easily forget.


How, then, do you enable customers to withdraw consent? Many say “just anonymise.” But anonymisation is not always effective. Blurring a photo is a classic example — but generative AI can now un-blur that image. The same applies to structured data.


That is why responsible AI means designing models that:


  • only train on consented data,

  • tag and isolate sensitive data,

  • mask transaction data, and

  • ask the second question: not just “can I collect and store this?” but “can I use it to train models that create real value for the customer?”
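

As a sketch of what those rules might look like as a pre-training filter: the field names and tags below are hypothetical, and the point is simply that the checks run before any row reaches a model:

```python
SENSITIVE_TAGS = {"health", "ethnicity", "political", "sexual_orientation"}

def training_rows(rows: list[dict]) -> list[dict]:
    """Keep only rows consented for training, isolate anything tagged
    sensitive, and mask transaction free text before it can be memorised."""
    kept = []
    for row in rows:
        # Rules 1 and 4: consent must exist *for this purpose* (training),
        # not just for collection and storage.
        if "model-training" not in row.get("consented_purposes", []):
            continue
        # Rule 2: data tagged sensitive never enters the training set.
        if SENSITIVE_TAGS & set(row.get("tags", [])):
            continue
        # Rule 3: mask the raw transaction description.
        kept.append(dict(row, description="[REDACTED]"))
    return kept
```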


And think about transaction messages. How many times have you written something in a payment note, even as a joke? “Hey Sexy Boots” might be funny to you, but it is now stored alongside an account number and could turn up in future searches for that account number.


People have used transaction messages to abuse partners, as a way to track them and inflict pain. How are you ensuring such content is excluded from your training data?
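

One way to start answering that question is to scrub free-text payment notes before ingestion. A minimal sketch, assuming a simple block-list and a regex for NZ-style account numbers; a production system would use a trained classifier rather than a keyword list:

```python
import re

# Placeholder block-list for illustration only.
BLOCKED_TERMS = {"hate", "threat"}

# Rough pattern for an NZ bank account number (bank-branch-account-suffix).
ACCOUNT_RE = re.compile(r"\b\d{2}-\d{4}-\d{7}-\d{2,3}\b")

def scrub_note(note: str) -> str | None:
    """Return a safe version of a payment note, or None to drop it."""
    if any(term in note.lower() for term in BLOCKED_TERMS):
        return None                               # abusive content never enters training
    return ACCOUNT_RE.sub("[ACCOUNT]", note)      # strip embedded account numbers

print(scrub_note("Rent for 02-1234-0123456-00"))  # 'Rent for [ACCOUNT]'
```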


This is not only about compliance — it is about understanding the risks of ingesting raw data without safeguards.


Agentic AI


So we say: smarter, safer. And I say, if you're going to do it, start with the agent; start with Agentic AI.


Remove the tasks from the user, or as I keep telling my team, train the agent, not the model.


Build something that takes a task away from our customers before we take that data and feed it into the system.


Alleviate a task on that customer's ‘to-do list.’ We talk about the ‘lazy tax’ that people sometimes pay: it's too much effort to move electricity companies, too much effort to change insurers. Some of those difficulties are built in; as companies we want it to be slightly difficult for customers to leave. But on the other side, as a company that can bring that information in, you can remove that burden from the customer. You can see this with a lot of electricity companies now, or with internet providers: if you want fibre, click here to move. You can do the same with financial services, with rentals, with anything that relies on making a decision for a customer.


Let the agent take the task of completing the forms away from the customer. Simplify those interactions. Isn't it easier to say ‘do you want to save 200 bucks now?’ than ‘hey, fill in this form and we'll let you know’?
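

Here is a sketch of that idea in Python: an agent assembles a pre-filled switching form from consented data and leaves the customer a single yes/no decision. The provider name and form fields are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class SwitchOffer:
    provider: str
    annual_saving: float
    prefilled_form: dict

def build_offer(customer: dict, current_cost: float, quote: float) -> SwitchOffer:
    """Turn 'fill in this form' into 'do you want to save $200?'."""
    return SwitchOffer(
        provider="ExampleEnergyCo",          # hypothetical provider
        annual_saving=current_cost - quote,
        prefilled_form={                     # populated from consented data
            "name": customer["name"],
            "address": customer["address"],
            "account": customer["account"],
        },
    )

offer = build_offer(
    {"name": "A. Customer", "address": "1 Example St", "account": "[ACCOUNT]"},
    current_cost=1800.0, quote=1600.0,
)
print(f"Switch to {offer.provider} and save ${offer.annual_saving:.0f}? [Yes/No]")
```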


This is not a fantasy; this is something we're building with our customers: hyper-personalised financial inclusion tools, transparency on decisions, and consent dashboards. These give you the ability to show understanding, to communicate to your customers what you're doing with their interactions, how their data is being used, and what services you provide.


Start with efficiency. Reduce the friction for your customers; reduce the difficulty of getting them on board. Forty-seven percent of people abandon a form on the first attempt.


You can remove that possibility. ‘Click here’ to connect your bank account, connect your energy company, and I can tell you what kind of service you can have. Pre-populate the form; give them two button clicks and they can get through it. Of those who do start, sixty-eight percent abandon partway through, so overall about 83% of people drop out (47%, plus 68% of the remaining 53%). But if you can automate that process and take those tasks away from the customer, they are three times more likely to complete.


Building resilience


Get to “maybe” faster.


Today, mortgage applications require customers to submit extensive documents. Those are reviewed manually, passed between departments, and — eventually — the customer hears maybe.


Resilience means turning that around: analysing incoming data rapidly, and showing customers within minutes whether they meet core requirements.


Then there's adaptive credit risk and adaptive scoring. These are AI models you can deploy because the customer is already in your ecosystem; they are using your services. This is where you give them the opt-in: “Hey, look, we want to give you real-time analysis of how you're doing. We want to see how you're paying your bills. We want to see that you have the money to continue to rent this place. And in exchange, we're going to give you these services, or a better price on this service, or in some cases the service itself, because we trust that you can afford it or pay us back.”
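

A minimal sketch of that opt-in pre-check, with entirely made-up thresholds, to show the shape of “get to maybe faster”: a coarse test against consented income and outgoings, never a final decision:

```python
def quick_maybe(opted_in: bool, monthly_income: float,
                monthly_outgoings: float, requested_payment: float) -> str:
    """Coarse pre-check: a 'maybe' in minutes, not a final decision."""
    if not opted_in:
        return "Opt in to real-time analysis to get an instant pre-check."
    surplus = monthly_income - monthly_outgoings
    # Made-up core requirement: repayment must fit within half the surplus.
    if requested_payment <= 0.5 * surplus:
        return "Maybe: you meet the core requirements. Full review to follow."
    return "Unlikely on current figures; here is what would need to change."

print(quick_maybe(True, 6500.0, 4200.0, 900.0))
# Maybe: you meet the core requirements. Full review to follow.
```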


Trust and competitive advantage


Finally, consent is the opening line of your relationship with a customer. If they do not trust you, they will not share data.


Deloitte reports that 88% of people prefer to buy from brands they trust. Sixty-seven percent will click on a trusted site even before deciding whether to purchase.


Hyper-personalisation can shift loyalty — if you present a clear value proposition.


Just like bank managers who once asked “How’s your mum?”, digital systems can say to customers:


  • “Your wedding savings goal is $20,000.”

  • “You are currently at $13,000.”

  • “Your wedding is six weeks away. Would you like a short-term top-up to bridge the gap?”
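

The arithmetic behind that conversation is trivial; the value is in surfacing it at the right moment, on consented data. A sketch using the figures above:

```python
def goal_nudge(goal: float, saved: float, weeks_left: int) -> str:
    """Turn a savings goal and a deadline into a concrete suggestion."""
    gap = goal - saved
    weekly = gap / weeks_left
    return (f"You are ${gap:,.0f} short of your ${goal:,.0f} goal. "
            f"That is ${weekly:,.0f}/week for {weeks_left} weeks, "
            f"or a short-term top-up could bridge the gap now.")

print(goal_nudge(20_000, 13_000, 6))
# You are $7,000 short of your $20,000 goal. That is $1,167/week for 6 weeks, ...
```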


That is personalisation with purpose: understanding goals, surfacing insights, and offering value in return.


Calls to action


Consent is not a form — it is a relationship. Think of it as a treaty between an organisation and its customers.


  • To technologists: build AI responsibly. Plan now for how consented data will shape your products and services.

  • To customers: understand how your data is used.

  • To policymakers: design rules now to avoid legal and ethical complications later.


If you’d like to keep the conversation going — contact Wych.
