Part 2 - An OpenAI Booking Assistant... that can hack itself?
Last weekend, I set myself a challenge of building an AI assistant that would help a prospective client manage their appointments for our client, Sisu. If you haven’t read Part 1, I’d suggest starting there.
This week, I wanted to focus on the integration with Sisu’s booking system, Phorest. I set a few core functionality tasks for the AI:
What are my upcoming appointments?
Can you help me cancel an appointment?
What services are available at a clinic?
What Doctors are available at a clinic?
What available appointments are there?
Account Management
When looking up a client’s upcoming appointments, I need to validate that they have an account. I decided to do this by asking them for their mobile phone number and sending them a validation text. Once they’re confirmed, I can permit the AI to look up their upcoming appointments.
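That verification flow can be sketched as a couple of small functions. This is a minimal, in-memory illustration with invented names — the real system would persist codes and call an SMS provider rather than stubbing it out:

```python
import random

# In-memory store of pending verification codes, keyed by phone number.
# Purely illustrative -- real code would persist these with an expiry
# and send the code via an SMS provider.
pending_codes: dict[str, str] = {}

def send_verification(phone: str) -> None:
    """Generate a short code for this number and 'text' it (stubbed)."""
    code = f"{random.randint(0, 9999):04d}"
    pending_codes[phone] = code
    # sms_provider.send(phone, f"Your Sisu code is {code}")  # assumed API

def verify_client(phone: str, code: str) -> bool:
    """Only a correct code unlocks appointment lookups for this number."""
    return pending_codes.get(phone) == code
```

The key point is that the AI never sees the stored code — it only learns whether `verify_client` returned true or false.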
Below, you’ll notice that Laila performs really well when passed 4- or 5-digit codes. She doesn’t pass anything back to the API until the code is a valid length. She then checks whether the code is correct via the API and handles potential errors really well.
Once she has the list of appointments, she can cancel them really easily by passing the request to our API.
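For context, this is roughly what describing a cancel function to the OpenAI tools API looks like. The function name and parameters here are my assumptions, not Sisu’s or Phorest’s real API:

```python
# A sketch of a cancel function as it might be declared to the OpenAI
# function-calling interface. Name and parameters are assumptions.
cancel_tool = {
    "type": "function",
    "function": {
        "name": "cancel_appointment",
        "description": "Cancel one of the verified client's upcoming appointments.",
        "parameters": {
            "type": "object",
            "properties": {
                "appointment_id": {
                    "type": "string",
                    "description": "ID of the appointment to cancel",
                },
            },
            "required": ["appointment_id"],
        },
    },
}
```

The model fills in `appointment_id` from the conversation; my backend still checks that the ID belongs to the verified client before cancelling.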
Can Laila hack herself?
Let’s run a little experiment: can Laila hack herself and find a way for an attacker to look up other people’s appointments?
I know the way I’ve coded the API and auth system won’t allow this, but it’s interesting to watch her try.
That’s really interesting. I don’t return the verification code to Laila, so there’s nothing she can do, but she’s happy to tell me all about the function call.
I can limit this within the configuration of the assistant, but it does highlight another concern: a spammer could start pumping out verification checks. I’ll need to add some handling for this case.
Feeding Laila
To increase Laila’s capabilities, I started adding more functions to her:
The ability to get clinics
The ability to get services in those clinics
The ability to get Doctors in those clinics
The ability to get available appointment slots based on a service, clinic and even a Doctor!
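Declared to the model, those four lookups might look something like the list below. Every function name and parameter here is an assumption for illustration, not Phorest’s real API:

```python
# How the four lookups might be described as OpenAI tools.
# All names and parameters are invented for this sketch.
lookup_tools = [
    {"type": "function", "function": {
        "name": "get_clinics",
        "description": "List Sisu clinics.",
        "parameters": {"type": "object", "properties": {}},
    }},
    {"type": "function", "function": {
        "name": "get_services",
        "description": "List services offered at a clinic.",
        "parameters": {"type": "object",
                       "properties": {"clinic_id": {"type": "string"}},
                       "required": ["clinic_id"]},
    }},
    {"type": "function", "function": {
        "name": "get_doctors",
        "description": "List doctors working at a clinic.",
        "parameters": {"type": "object",
                       "properties": {"clinic_id": {"type": "string"}},
                       "required": ["clinic_id"]},
    }},
    {"type": "function", "function": {
        "name": "get_available_slots",
        "description": "List open appointment slots, optionally for one doctor.",
        "parameters": {"type": "object",
                       "properties": {"clinic_id": {"type": "string"},
                                      "service_id": {"type": "string"},
                                      "doctor_id": {"type": "string"}},
                       "required": ["clinic_id", "service_id"]},
    }},
]
```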
Nothing here is groundbreaking, but it could be really monotonous for the user to have to ask the questions in sequence. For example:
“What clinic is near me?”
“What services are available in that clinic?”
“What Doctors are available in that clinic?”
“What is the next available slot for this Doctor in for that service in that clinic?”
These ARE the API requests that logically need to be sent, though.
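That chain of dependent lookups can be sketched like this — the stub data and function names are invented stand-ins for the Phorest calls, but the shape of the dependency is the point:

```python
# Invented stub data standing in for the booking API, plus the chain of
# lookups that the sequence of questions above maps onto.
CLINICS = {"Cork": "clinic-1"}
SERVICES = {("clinic-1", "Lip Flip"): "svc-1"}
DOCTORS = {("clinic-1", "Dr. Brian Cotter"): "doc-1"}
SLOTS = {("clinic-1", "svc-1", "doc-1"): ["2023-05-24T10:00"]}

def available_slots(clinic: str, service: str, doctor: str) -> list[str]:
    clinic_id = CLINICS[clinic]                   # "What clinic is near me?"
    service_id = SERVICES[(clinic_id, service)]   # "What services are available?"
    doctor_id = DOCTORS[(clinic_id, doctor)]      # "What Doctors are available?"
    return SLOTS.get((clinic_id, service_id, doctor_id), [])  # "Next slot?"
```

Each call needs an ID produced by the previous one, which is exactly why forcing the user to walk the sequence themselves feels so monotonous.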
I want something much more intuitive to happen, a simple question like:
Is Dr Brian Cotter available for a lip flip in Cork next Tuesday?
The response is really good.
Ok, so she’s a little confused on a few points. She starts out by saying “Dr. Brian Cotter has available appointment slots for a Lip Flip service at the Cork - Sisu Clinic next Tuesday (which is not May 24th)” but then clearly goes on to describe how that isn’t Tuesday with “It appears Dr. Brian Cotter is not available for next Tuesday. However, you can choose from these available slots on May 24th or look for another date.”
I’ve noticed this quite a bit: errors where she contradicts or corrects herself. For example, she has a feeling that it’s the 5th of October, 2023.
Next steps
I think I can solve 90% of these issues by prompting Laila correctly and asking her to only share pertinent information. The next steps are:
Can Laila book an appointment for a client?
What does an admin dashboard for people to oversee how Laila is interacting with people look like?
When the audio aspect of the latest ChatGPT is available, can I call Laila on the phone!?