
How I Integrate AI in My Daily Work
AI is present in my day-to-day work, helping me boost creativity, accelerate delivery, and stay aligned with broader business goals. Below are a few concrete examples of how I leverage AI:
​
1. Accelerating UX/UI Ideation and Backlog Creation
- I have used conversational models (ChatGPT, Claude) to brainstorm user flows, prioritise features, and align design backlogs with company OKRs.
- Example: In my previous role at comparis.ch I generated a large set of possible product design improvements, which the PO and I then analysed to select the most viable candidates for A/B testing.
2. Front‑End Collaboration with Node.js & CapacitorJS Teams
- AI-assisted code review and snippet generation (e.g., GitHub Copilot) speeds up development of shared components in our Node.js micro-frontend architecture.
- For hybrid mobile apps powered by CapacitorJS, I use Copilot to create front-end code that matches my designs (app example here) and integrate it directly into the main GitHub branch. This lets developers focus on back-end tasks, while my attention to detail keeps the apps visually seamless and cuts handoff friction.
3. Rapid Infographics & Data Visualisations
- When preparing slide decks under tight deadlines, I feed raw data into AI‑powered charting tools (e.g., ChatGPT integration or DALL·E for stylised backgrounds).
- Within minutes, I have polished bar charts, heat maps, or diagrams. These auto‑generated assets free up time for narrative refinement, ensuring every slide tells the right story.
4. Creative Asset Generation & Fine‑Tuning
- Image generation (Adobe Firefly, DALL·E) to speed up concept art or mood boards, followed by “Generative Fill” in Photoshop to patch odd crops or extend backgrounds. In fairness, some assets still need manual work afterwards, but the speed-up definitely helps.
- Video and voice‑over creation (Synthesia, HeyGen) to prototype interactive tutorials or onboarding sequences without scheduling human shoots.
5. Micro‑copy & Presentation Polishing
- AI‑driven copy tools to generate web‑page headlines, button labels, and error messages that I then A/B test with real users.
- Presentation tidy-ups, such as suggestions for concise bullet points and auto-formatting to our corporate template, easily saving hours on each deck.
​
​
Below are a few more extensive AI-related projects.
Project 1 - VR App
This first example comes from an app we were developing at Mindhealth VR. Its primary purpose is to create a serene and supportive safe space within the virtual world, designed to provide comfort and relief for people experiencing mild to moderate mental health challenges, such as anxiety and stress.
​
After developing most of the relaxation side of the app, we decided we would like to include various audio tracks, from guided meditation to sound baths. After some testing, we realised the hand interactions and controllers of VR headsets were still quite confusing for new users, so I decided to implement a voice command option. It then evolved into the strongest selling point of the app, with all sorts of voice interactions, including real-time AI-powered conversations to practise challenging workplace topics and other training modules. More information below on how it all started...

Objective
In this specific example, I wanted to find a way to recognise a voice command and, consequently, trigger an action (i.e. play an audio track). The outcome would then be tested with users (1:1 interviews) to understand whether it helps their interaction.
​​
​
First step - Find the right tech (with scalability in mind)
​
What packages need to be included in Unity for voice recognition?
- As we were developing for Meta headsets, this was a simple search: the Meta All-in-One SDK includes the Voice SDK, which lets us connect to the microphone and sets up the path for voice recognition.
​
Will a package be enough, or do we need more than that? How much can we achieve just with C# and Meta's Voice SDK package?
- On its own, the package was a bit limited: from a scalability standpoint, we wanted an AI model we could train for future use, for example to create live Q&A and other features.
Can we use existing tools, or do we have to create a way to train an AI model on our own? Would that even be worth it at this stage?
- These are all valid cost/effort vs outcome questions. In this specific case we found Wit.ai, also developed by Meta and free to use for testing. This avoided compatibility issues and poor documentation, as we could find examples where it was already in use. Bonus points for its nice UI to help us train the AI model. That was all we needed to proceed, along the lines of: fail fast, iterate, test, learn.
​​
​
Second step - Connect to the API
​
As usual, even though there's good documentation, a lot of it is not up to date or doesn't quite fit our desired outcome. To create the connection script, I used perplexity.ai. It was extremely helpful for adding debug logs at every step of the way, letting me fix the issues one by one.
​
Some of the tasks for this included:
- Connecting Wit.Ai to Unity via Meta's plugin with the right server address, ID, etc.
- Setting up Unity's components in a way that could work with Wit.ai (a rough sanity-check sketch follows this list)
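Since most of the connection is wiring a configured Wit component into the scene, a small sanity-check script can confirm the round trip to the API. This is only a sketch: it assumes the Meta.WitAi namespace from the current Voice SDK (older versions use Facebook.WitAi) and a Wit component already configured in the Inspector with the server access token.

using UnityEngine;
using Meta.WitAi; // namespace assumption; older Voice SDK versions use Facebook.WitAi

public class WitConnectionCheck : MonoBehaviour
{
    // Wit component, assigned in the Inspector and configured with the Wit.ai server access token
    [SerializeField] private Wit wit;

    private void Start()
    {
        if (wit == null)
        {
            Debug.LogError("No Wit component assigned - check the GameObject structure.");
            return;
        }

        // Debug logs at each stage make connection issues easy to isolate
        wit.VoiceEvents.OnStartListening.AddListener(() => Debug.Log("Mic open, streaming audio to Wit.ai"));
        wit.VoiceEvents.OnError.AddListener((error, message) => Debug.LogError("Wit.ai error: " + error + " - " + message));

        wit.Activate(); // opens the microphone; a response or error confirms the API connection
    }
}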
​​
​
Third step - Set up the VR app and code
​
This was definitely the most challenging part: even with a well-set-up project and API connection, there are a lot of little settings that need to be configured:
- A script to help with Wit Runtime configuration
- A script to configure the voice listener
- Input action assets, correct GameObject structure, etc.
​
For the code, as explained above, Perplexity helped a lot, as this falls a bit outside my comfort zone (for now). It helped, for example, with coming up with the right listeners and code structure, and with debugging bit by bit:
wit.VoiceEvents.OnResponse.AddListener(HandleVoiceResponse);
wit.VoiceEvents.OnPartialTranscription.AddListener(HandlePartialTranscription);
wit.VoiceEvents.OnFullTranscription.AddListener(HandleFullTranscription);
wit.VoiceEvents.OnError.AddListener(HandleError);
wit.VoiceEvents.OnStartListening.AddListener(HandleStartListening);
wit.VoiceEvents.OnStoppedListening.AddListener(HandleStoppedListening);
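The handlers referenced above are plain methods on the same MonoBehaviour. Below is a rough, illustrative sketch of what they can look like; the 'sound bath' keyword match stands in for the intent handling Wit.ai does in the real app, and the AudioSource is assigned in the Inspector.

// These methods live in the same MonoBehaviour that registers the listeners above.
// WitResponseNode comes from Meta.WitAi.Json (Facebook.WitAi.Lib in older SDK versions).

[SerializeField] private AudioSource soundBathTrack; // illustrative audio track to trigger

private void HandleVoiceResponse(WitResponseNode response)
{
    // Full Wit.ai response, including recognised intents and entities
    Debug.Log("Wit.ai response: " + response);
}

private void HandlePartialTranscription(string text) => Debug.Log("Partial: " + text);

private void HandleFullTranscription(string text)
{
    Debug.Log("Heard: " + text);
    // Illustrative trigger: in the real app the intent comes back in the Wit.ai response,
    // but matching the transcription shows the idea of "recognise a phrase, play a track"
    if (text.ToLower().Contains("sound bath"))
    {
        soundBathTrack.Play();
    }
}

private void HandleError(string error, string message) => Debug.LogError("Wit.ai error: " + error + " - " + message);
private void HandleStartListening() => Debug.Log("Started listening");
private void HandleStoppedListening() => Debug.Log("Stopped listening");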
​​
​
Fourth step - Train the model, test and prepare for wider tests
​
This part was heavily helped by Wit.ai's interface, where we can add utterances (what has to be said), intents (what to call when an utterance is recognised), synonyms or similar words, etc. In the beginning I was sceptical, as it was slow and didn't understand anyone very well. But after training it better, removing words that didn't make sense, and adding similar words for better recognition, it improved a lot. For instance, 'sound bath' was the required phrase, so 'sound booth' would also work, and after some more of this kind of training it worked for most accents and people.
​
We still had a big barrier: the fact that it was always listening was a problem for both privacy and performance. The solution was to activate the listener via a controller button or a hand gesture (sketched below). It helped massively with performance, and we were ready to test!
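As a rough idea of how the trigger works, here is a minimal sketch. The button mapping and class name are illustrative, and it assumes the Meta/Oculus OVRInput API alongside the same Wit component used for recognition.

using UnityEngine;
using Meta.WitAi; // namespace assumption

public class VoiceCommandTrigger : MonoBehaviour
{
    [SerializeField] private Wit wit; // the same Wit component that handles recognition

    private void Update()
    {
        // Only open the microphone while the user asks for it - better for both privacy and performance
        if (OVRInput.GetDown(OVRInput.Button.One) && !wit.Active)
        {
            wit.Activate();   // start listening for a single utterance
        }
        else if (OVRInput.GetDown(OVRInput.Button.Two) && wit.Active)
        {
            wit.Deactivate(); // stop listening early if needed
        }
    }
}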
​
Outcome
​
You're probably thinking... so in the end you still need a controller or some fiddly hand gesture?
Well, the tech might not be there just yet for it to be always on. Running a full language model locally could be a solution, but then we ran into app-size issues and the limits of local processing on the headsets. Maybe we just need to come up with a smoother trigger. Either way, this has been really well received by all partners and users as a new feature most competitors don't have. It obviously comes with its challenges, but it worked perfectly for the objective of being ready to test, and we are still learning a lot from it!
​
Next Iterations
​
This project was the beginning of voice use with AI integration. We then created real-time AI conversations with ChatGPT integration for practising difficult conversations and training modules. Some of those conversations can be seen in the video right above this section.
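For the curious, the ChatGPT side is conceptually a single HTTP call from Unity once a transcription is available. Below is a minimal sketch only, assuming the OpenAI chat completions endpoint; the model name, system prompt and class names are illustrative, and the real implementation layers persona, safety prompts and response parsing on top of this.

using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class ChatGptClient : MonoBehaviour
{
    private const string Endpoint = "https://api.openai.com/v1/chat/completions";
    [SerializeField] private string apiKey; // loaded from secure config, never hard-coded in a build

    // Sends the user's transcribed speech to the chat completions API and logs the raw JSON reply
    public IEnumerator SendChatMessage(string userText)
    {
        // Naive JSON for illustration; real code should escape userText properly
        string body = "{\"model\":\"gpt-4o-mini\",\"messages\":[" +
                      "{\"role\":\"system\",\"content\":\"You are a colleague in a workplace role-play.\"}," +
                      "{\"role\":\"user\",\"content\":\"" + userText + "\"}]}";

        using (UnityWebRequest request = new UnityWebRequest(Endpoint, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(body));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            request.SetRequestHeader("Authorization", "Bearer " + apiKey);

            yield return request.SendWebRequest();

            if (request.result != UnityWebRequest.Result.Success)
                Debug.LogError("ChatGPT request failed: " + request.error);
            else
                Debug.Log("ChatGPT reply: " + request.downloadHandler.text); // parse choices[0].message.content in the real app
        }
    }
}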

Project 2 - AI chat bot
This project (image above) was approached from a more UX/UI perspective, as a data team was tasked with training the AI model, so my main role was to support the PM responsible for it. Some of my contributions included:
- Helping define the AI model's personality in line with our brand and tone of voice guidelines
- Designing the look & feel of the conversation window, and defining all possible user interactions and scenarios
- Helping set up prototypes and user-testing plans to validate the design and general usability
- Taking an iterative approach to design and project rollout
- Continuously improving the product design based on the main tracked events in Google Analytics and the 'give feedback' interaction
Final thoughts and learning outcome
​
I truly believe the rush for AI-powered solutions might bring amazing technology, but we still have a lot to learn. This project was another stepping stone in that journey. I learned a lot about the barriers we can encounter in older corporations that rely heavily on their core values and how they're perceived. This resulted in very tame model language to start with; it will get better, but being extremely careful with content, and using mostly content from their own articles, ended up being quite restrictive.