Webhook Information

The most important thing to do when setting up the CRM integration is to properly fill out the webhook steps that you're using! There are a few fields that you ALWAYS need to fill out in every webhook step, plus many optional fields you can use as well! Check back here every once in a while, as we're constantly adding new optional fields and features! Also remember that you can use GHL keys in the values for your webhooks. For instance, you can set up a custom field for the prompt, then enter the key for that field as the value of the prompt data point in your webhook!

Remember that each different webhook step can have different data points depending on your needs!

Webhook Type, Name, Method, and URL

The very first thing to do when setting up your webhook is to make sure the settings are correct. First, the webhook needs to be a normal webhook, not a premium one. The name can be whatever helps you remember the webhook's purpose, but the "method" needs to be set to "POST". Finally, the URL needs to be exactly "https://systems.capriai.us/gpt3", without the quotation marks! If the URL is incorrect, the webhook will fail to fire, which you can check in the workflow history!
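In HTTP terms, the webhook step is just building a POST request to that URL. Here is a minimal sketch using Python's standard library; the empty body is a placeholder, since the real custom data points are covered in the sections below:

```python
import urllib.request

# The webhook step builds the equivalent of this request for you;
# the empty body here is a placeholder for the custom data points.
req = urllib.request.Request(
    "https://systems.capriai.us/gpt3",   # must match exactly
    data=b"{}",                          # custom data points go here as JSON
    headers={"Content-Type": "application/json"},
    method="POST",                       # the "method" setting
)
# The request object is built but not sent in this sketch.
```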

Required Custom Data Points

There are a few custom data fields that absolutely need to be filled out in every single webhook that you use, and we'll go over each of them here! The name of each section below is what you enter as the "key" of the data point, and the description tells you what to enter as the "value"!

prompt

Here you need to fill out the prompt that you wish the AI to use when this particular webhook is triggered. Remember, you can use a different prompt in each webhook to suit your particular needs!

key

The key should be the API key that you got from OpenAI at https://platform.openai.com/account/api-keys, but remember, the key here needs to be the full key, just like the one you copied into the portal settings. Don't make the mistake of copying the shortened version of the key, since you can only see the full version once! If you have clients using Capri, you can also have them use their own API key here to save on costs, just remember to have them add a payment method to their OpenAI account!

model_type

Here you can choose the AI model that you want to use. Currently we only support Davinci, which should be entered as "davinci"; ChatGPT, entered as "chatgpt"; and GPT4, entered as "gpt-4", but we will be integrating with more models in the future! For now, we recommend using ChatGPT, since GPT4 currently has about half the rate limit that ChatGPT does, which can easily cause issues! If you want to use larger prompts, or larger tag prompts so that you can give the AI specific examples, you can also use the 16k ChatGPT model by entering "gpt-3.5-turbo-16k" as the model! This gives you 16k tokens to work with rather than the standard 4k!

temperature

The value for the temperature should be between 0 and 1, and we recommend that you use the same temperature that you test with in the emulator, so around "0.08" or "0.11"!

max_length

For the max length, you should enter the same max length that you have in the portal settings, so around "180", give or take. Remember, though, this is a cutoff, not a target, so it doesn't shorten the responses the AI generates; if you set it too low, the AI's response can get cut off mid-sentence!

presence_penalty or frequency_penalty

You also need to include either a "presence_penalty" or a "frequency_penalty" data point, though one is enough. We recommend using "presence_penalty" with a value around "0.8", but it can be any value between 0 and 1!

task

The final required data point is "task". Here you can enter "respond", "outreach", "evaluate", or "custom" for the value, and we'll go over those in a later page!
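Taken together, a webhook step's custom data with every required data point filled in might look like the following sketch. Every value here is a placeholder (including the API key), and the keys must match the names above exactly:

```python
# Placeholder values only; "key" would be your full OpenAI API key.
required_data = {
    "prompt": "You are a friendly assistant for Acme Plumbing.",
    "key": "sk-PLACEHOLDER",
    "model_type": "chatgpt",    # or "davinci", "gpt-4", "gpt-3.5-turbo-16k"
    "temperature": "0.08",      # between 0 and 1
    "max_length": "180",
    "presence_penalty": "0.8",  # or use "frequency_penalty" instead
    "task": "respond",          # or "outreach", "evaluate", "custom"
}

# A quick sanity check that nothing required is missing:
required_keys = {"prompt", "key", "model_type", "temperature", "max_length", "task"}
missing = required_keys - required_data.keys()
assert not missing, f"missing required data points: {missing}"
assert "presence_penalty" in required_data or "frequency_penalty" in required_data
```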

Optional Custom Data Points

These data points are entirely optional, but adding some of them can greatly expand the capabilities of the AI and the Capri system in general! Be aware, though, that some of them increase the token count used in your webhook, so if you use too many of them, say multiple custom tags, you risk overloading the OpenAI system, which will cause your AI to fail to respond! Remember, the total token count, including the prompt, the conversation history, the spreadsheet (if you aren't using the knowledge base extension), and certain webhook data points, can only be 3000 or less before it overloads!
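Since that 3000-token budget is shared across the prompt, the history, the spreadsheet, and the action-style data points, it can help to tally rough estimates before adding more features. A back-of-the-envelope check (the token counts below are illustrative guesses, not measured values):

```python
# Illustrative token estimates for one webhook; real counts will vary.
estimates = {
    "prompt": 600,
    "conversation_history": 900,
    "spreadsheet": 700,
    "handoff": 200,      # handoff/disqualify each add roughly 100-300
    "custom_tags": 400,  # e.g. two custom tags at ~200 each
}

total = sum(estimates.values())
print(total, "of 3000 tokens")  # 2800 of 3000 tokens
assert total <= 3000, "over budget: drop a tag or shorten the prompt"
```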

channel

If you want the AI to only respond to the contact via a specific channel, you can enter it here! Currently the only supported channels are "SMS", "GMB", "FB", and "IG", which are SMS, Google My Business, Facebook, and Instagram DMs, respectively. If you don't include this data point, the AI will, by default, reply to the contact via the channel the incoming message came in on!

knowledge

Here is where you enter the URL for the spreadsheet that you are using, if you are using one! Each webhook step can only access a single spreadsheet, but if you have multiple webhooks for different situations, you can give them different spreadsheets relating to their individual situations!

fallback

There are two options for this data point. The first is "silent", the default behavior if you don't include this data point, which causes the AI to remain silent whenever it reaches a fallback and add a "fallback_reached" tag to the contact. The second is "respond", which has the AI try to respond to the contact as well as add the "fallback_reached" tag!

handoff

The handoff function is where you can specify the conditions you want met for the AI to remain silent and add the "handoff" tag. This is one of the data points that, if used, increases the number of tokens used in your webhook, usually by 100-300 tokens depending on how complicated the value is. There is a separate page with more information about how this works!

disqualify

This function is where you can specify conditions that, when met, cause the AI to consider the contact disqualified: it will stop interacting with that contact and add the "disqualify" tag. This works much the same as the "handoff" function, and also uses extra tokens, so you can find more information about it on the handoff page.

tag:tagnamehere

This is where you can use the custom tag function! This allows you to set a condition or conditions that, when met, adds a tag to the contact. More information is available in the Custom Tag page, and this function uses extra tokens.

goal

If you want the AI to have the capability to check a calendar to do things like book an appointment or offer times, you need to add a data point with a key of "goal" and a value of "booking". When this is added, the AI will check the calendar you have linked in your portal settings, or the one you've specified in the webhook step, though this will use additional tokens. If you have told the AI to book appointments or offer times without this data point, it will just make up times!

booking_action

If you want the AI to have the capacity to actually book your appointments itself, rather than just offer times and have a team member book the appointment (or offer a booking link), you can add this data point with a value of "auto". By default, the AI will book into the calendar that you have set in your portal settings, but you can designate a different calendar by using a data point that we will cover shortly! Be aware, the booking action uses around 1k tokens, so if you use it in the same webhook as many custom tags or the like, you can easily overload the AI and cause it to stop responding. In that case, try separating out your workflows based on which tags the contact has!

calendar

If you want to have the AI access a different calendar than the one you have in your portal settings, say for instance you have different webhooks depending on what service the contact is interested in, you can add this data point and have the value be the ID of the calendar you want to use. You can find this calendar ID by copying the permanent link from the calendar, pasting this somewhere, like the address bar of your browser, and then copying the last part of the URL!
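The "last part of the URL" step above can be done with simple string handling. Assuming a permanent link shaped like the hypothetical one below (the ID segment is made up for illustration), the calendar ID is the final path segment:

```python
# Hypothetical permanent link; the final segment here is a made-up ID.
permanent_link = "https://example.com/widget/booking/AbC123xYz"

# Trim any trailing slash, then take everything after the last "/".
calendar_id = permanent_link.rstrip("/").rsplit("/", 1)[-1]
print(calendar_id)  # AbC123xYz
```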

history

This function allows you to limit the conversation history that the AI can see and respond to! By default, without this data point, the AI is sent the entire conversation history every time a new message comes in, even messages from months ago, which can cause issues long term. With the "history" function, you can limit this. If you use "exclude" as the value, the AI will only ever be sent the message that triggered the webhook, which is useful if you're creating something like a Q&A bot or a knowledge repository! If you instead want to limit the AI to the conversation from a certain time period, enter a number value, with whole numbers representing hours and decimals representing fractions of an hour. For example, a value of "2" has the AI read the last two hours every time a message comes in, a value of "0.5" the last 30 minutes, and a value of "0.16" roughly the last 10 minutes. Having a limit like this, whether a few hours or a few days, can greatly help the AI avoid excess token issues long term!
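The numeric values work like fractions of an hour on a clock, so converting a "history" value to minutes is just multiplication by 60:

```python
def history_window_minutes(value: float) -> float:
    """Convert a 'history' data point value (in hours) to minutes."""
    return value * 60

print(history_window_minutes(2))     # 120.0 -> the last two hours
print(history_window_minutes(0.5))   # 30.0  -> the last half hour
print(history_window_minutes(0.16))  # 9.6   -> roughly the last 10 minutes
```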

actions_model

Different models have different strengths and weaknesses. For instance, GPT4 is significantly smarter at detecting things like conditions, but it's also more expensive and has a lower rate limit. If you want to take advantage of this increased ability to parse conditions without risking that lower rate limit, you can activate a different model for "actions", like the handoff prompt, the booking check, etc., than for the main completion! Just add a data point with a key of "actions_model" and a value of the model you want to use: "gpt-4", "davinci", or "chatgpt". If you want to use the same model for everything, simply leave this data point out!

actions_api_key

If for some reason you want to use a different API key for the actions compared to the main fulfillment, you can do so with this data point! This could be because, for instance, you have access to GPT4 while your client doesn't. In that case, adding "actions_api_key" and "actions_model" would allow the AI to use the GPT4 model for actions, even though your client normally couldn't!
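For instance, that pair of data points might look like the following (both values are placeholders; the key would be your own full OpenAI key rather than the client's):

```python
# Placeholders: run actions on GPT4 with your own key, while the main
# completion keeps using the client's key and model.
actions_overrides = {
    "actions_model": "gpt-4",
    "actions_api_key": "sk-YOUR-OWN-KEY-PLACEHOLDER",
}
assert actions_overrides["actions_model"] in {"gpt-4", "davinci", "chatgpt"}
```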

language

If you're using Capri in a language other than English, you might have some issues with the "custom tag" features or the booking functionality. This is because our system prompts and everything in the backend is in English, while your main prompt and conversation might be in something else! To help solve this, you can use the "language" data point to have our system prompts translated to the specified language! Bear in mind, though, that the names of the languages need to be the English names rather than the native names! For instance, the value would be "German" rather than "Deutsch" or "Swedish" rather than "Svenska"!

timezone

By default, Capri will book appointments using the timezone set for the subaccount you're using, but you can specify a different timezone with this data point! For the value, just enter the name of the timezone as it appears in GHL, not including the offset. For instance, Eastern time would be "America/New_York" and AEST would be "Australia/Brisbane"!
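Those names follow the standard IANA timezone format, so one way to sanity-check a value before putting it in the webhook is Python's zoneinfo module (Python 3.9+, with system timezone data available):

```python
from zoneinfo import ZoneInfo

# Both example values from above are valid IANA zone names;
# an invalid name would raise ZoneInfoNotFoundError instead.
eastern = ZoneInfo("America/New_York")
aest = ZoneInfo("Australia/Brisbane")
print(eastern.key)  # America/New_York
```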

training

If for some reason you want the AI to not use any of the training sessions from the emulator, you can use this data point! If you have this with a value of "exclude", the AI will not be given any of the trained data when generating responses!
