Generative AI refers to artificial intelligence tools (e.g., ChatGPT, Gemini, Meta AI, and Grok) that involve large language models built from vast pools of information using machine-learning algorithms. The tools can produce seemingly thoughtful and well-informed answers to a wide variety of questions and requests. It is increasingly common for professionals to use these tools in their work to save time at multiple stages of a business project, such as identifying and summarising relevant information, collecting and analysing data, and preparing reports and presentations. Nevertheless:

I GUARANTEE THAT I DO NOT AND WILL NOT USE GENERATIVE AI IN MY WORK.
Why not? Instead of describing my own reasons (why should you care?), I’ll list some of the reasons why my AI-free guarantee is valuable to organisational clients.
Sourced from mindonthejob.com
1. You hire for expertise
With the right inputs, generative AI tools can spit out document summaries that seem informative, data analyses that appear to answer research questions, and reports that look professional — often in less time than it takes to prepare the prompt.
But how much can someone requesting a summary understand about the document they didn’t read? How sure are they that data analyses they didn’t conduct are valid and meaningful? How willing are they to stand by the contents of a report they didn’t write?
Yes, technology can be useful, but I believe there is genuine value to having a skilled professional undertake these tasks themselves. As someone who has had to summarise complex findings from my own research, I know how much important detail and nuance is absent from a summary (even one prepared thoughtfully). My research experience has taught me that analysing data is rarely straightforward — an approach that gives meaningful results in one context may be misleading in another. And the process of writing always helps me clarify and refine my thoughts and ideas. Better understanding means better decision-making and more valuable outcomes.
An AI-free guarantee means you get the benefits of your consultant’s expertise throughout every stage of a project.
2. You expect care and responsibility
Large language models work by identifying and mimicking patterns. They produce content that looks similar to the content on which they were trained. The results often look convincing — but they can also be inaccurate.
It’s easy to find articles about problematic AI outputs. The problems range from minor inaccuracies to misleading statements to entirely hallucinated source material or harmful recommendations. This is not a small risk; it keeps happening, even in work done by large consulting organisations.
- AI Gone Wrong: The Errors, Mistakes, and Hallucinations of AI (2023 – 2026)
- Deloitte to partially refund Australian government for report with apparent AI-generated errors
- How AI is introducing errors into courtrooms
- Biggest AI Failures of 2024
These articles focus on the most obvious errors. But because generative AI works by using existing information to identify patterns, another consequence is that its outputs tend to reflect common misconceptions and systemic biases.
An AI-free guarantee is part of a broader commitment to exercise due care and responsibility. If you engage me, you can be confident that the sources I draw upon are real, the analyses I conduct are reproducible, and the conclusions I draw are justifiable.
3. You require privacy and confidentiality
Many of my clients have required undertakings about privacy and confidentiality, because my work can involve access to sensitive data. This requirement goes beyond how I talk about the work and my clients, and includes the manner in which I keep client data secure.
Generative AI represents a risk to confidentiality because users can and often do supply sensitive information (e.g., documents with proprietary information or spreadsheets with identifiable details) as part of the prompting process, even though:
- these tools can incorporate users’ prompts and uploads to use in training future models, and
- companies who supply the AI tools can access and review prompts and uploads as part of their ongoing evaluation process.
Some professional versions claim not to do this, but (a) that claim only applies to specific versions of some tools, and (b) companies responsible for generative AI tools have already demonstrated an unwillingness to recognise the protections of copyright laws, so a little caution with sensitive information seems reasonable.
There are additional cybersecurity risks when whole systems are managed by AIs.
- How Private Are Your AI Prompts? What Small Business Owners Need to Know About AI Privacy Policies
- The Dark Side of AI: Cybersecurity Threats and Privacy Concerns
An AI-free guarantee means a whole category of risks you don’t need to worry about.
4. You have (or will have) sustainability obligations
Increasingly, businesses are being asked to account for their ethical and environmental sustainability, not only in terms of their own practices, but also those of their suppliers. Consider the growing pressure for organisations to avoid partners/suppliers who:
- Benefit from modern slavery,
- Have harmful impacts on the environment, or
- Engage in unethical behaviour.
How does this apply to generative AI?
Firstly, those “vast pools of information” I mentioned early on, which were used to train most of these tools, include an enormous amount of intellectual property that was obtained unlawfully (in breach of copyright rules). This includes unpaid access to and use of the work of many writers and artists, skilled workers who are already often underpaid — and who now have fewer paid work opportunities because organisations choose to source this skilled work from generative AI. Using these tools means partnering with organisations engaged in unethical practices.
- New York Times takes legal action against AI startup for use of content without permission
- Australian authors challenge proposal for AI to have exemption from copyright laws
- AI industry hits legal storm
Secondly, generative AI has an unusually large ecological footprint. To meet the growing demand for generative AI, many new data centres are being built, with serious environmental consequences: mining the minerals, consuming water for cooling, and creating e-waste from components that last only a few years. Running these centres takes a great deal of energy — some projections suggest data centres could be consuming 20% of global energy within 5 years. The United Nations and other NGOs have begun calling on governments to require organisations to report which products and services are supported by AI and to document the associated environmental impact. That reporting will cover not just your own operations, but those of your suppliers.
- Artificial intelligence: How much energy does AI use?
- Why AI uses so much energy — and what we can do about it
An AI-free guarantee is a part of your supply chain that contributes positively to your overall record of ethical and environmental sustainability.
Ultimately, I believe suppliers who commit to an AI-free guarantee offer greater value to their clients.
(c) 2025 Ben J. Searle
