ServiceNow has recently started offering some tools for ChatGPT integration, some of which fall under its IntegrationHub Pro offering. It's well worth checking out the new official options, in my opinion.
A while ago I thought I'd try setting up my own integration with ChatGPT on a personal instance, and I've only just got round to it. I thought I'd document the process here in case anyone is interested.
I'll write a few of these articles, as I had an idea I thought might be useful. What I'm trying to achieve is the ability to ask ChatGPT to write a script, and then have ServiceNow create that script on the platform.
For the initial setup, do the following:
Create a ChatGPT API Key
Open the following link and create an API key: https://platform.openai.com/account/api-keys
As an FYI, the key is separate from any ChatGPT Plus subscription you might have – it will likely come under a new billing process. Once you have created the key, note it down and continue with creating a REST message.
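One thing worth thinking about is where to keep the key. In the steps below I paste it straight into the REST message header, which is fine for a personal instance, but you could also store it in something like a system property and read it at runtime. A minimal sketch, assuming a hypothetical chatgpt.api_key system property:

```javascript
// Background script example only: read the ChatGPT API key from a (hypothetical)
// system property called "chatgpt.api_key" instead of hard-coding it everywhere.
var apiKey = gs.getProperty('chatgpt.api_key');

if (!apiKey) {
    gs.error('No ChatGPT API key found in the chatgpt.api_key system property', 'ChatGPT');
} else {
    // Only log the length - never log the key itself
    gs.info('ChatGPT API key loaded (' + apiKey.length + ' characters)', 'ChatGPT');
}
```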
Create a new REST Message
In ServiceNow, open “REST Message” under System Web Services.
Create a new REST message. Enter the following details:
- Name: ChatGPT
- Endpoint: https://api.openai.com/v1/chat/completions
- Open the “HTTP Request” tab. Create two new HTTP headers as follows:
| Name | Value | Example |
| --- | --- | --- |
| Authorization | Bearer [API Key] | Bearer sk-xyzxxxxxxxxxxx |
| Content-Type | application/json | |
Create a new “HTTP Method” with the following details. You can delete the default GET.
| Name | HTTP Method | Endpoint |
| --- | --- | --- |
| POST | POST | https://api.openai.com/v1/chat/completions |
You should now have the bones in place to send messages; next, we need to write some code to submit the requests.
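Before building anything reusable, you can sanity-check the REST message from Scripts - Background. This is just a rough sketch: it assumes the REST message and POST method created above, and the commented-out line shows how you could override the Authorization header at runtime using the hypothetical chatgpt.api_key property mentioned earlier.

```javascript
// Quick smoke test of the "ChatGPT" REST message from Scripts - Background.
var request = new sn_ws.RESTMessageV2('ChatGPT', 'POST');

// Optional: override the Authorization header at runtime from the hypothetical
// chatgpt.api_key system property instead of relying on the value saved on the header.
// request.setRequestHeader('Authorization', 'Bearer ' + gs.getProperty('chatgpt.api_key'));

request.setRequestBody(JSON.stringify({
    "model": "gpt-3.5-turbo",
    "messages": [{ "role": "user", "content": "Say hello in five words or fewer." }],
    "temperature": 0.7
}));

var response = request.execute();
gs.info('Status: ' + response.getStatusCode(), 'ChatGPT');
gs.info('Body: ' + response.getBody(), 'ChatGPT');
```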
Create a Script Include
We will now create a Script Include that can be used to process ChatGPT requests. Below is the initial code I have used.
| Name | API Name (automatically generated) |
| --- | --- |
| ChatGPT | global.ChatGPT |
```javascript
var ChatGPT = Class.create();
ChatGPT.prototype = {
    initialize: function() {
        this.model = "gpt-3.5-turbo";
        // Uncomment the following line if you want to use the "gpt-4" model
        // this.model = "gpt-4"; // Note: There is a waitlist for this.
        gs.info("ChatGPT instance created with model: " + this.model, "ChatGPT");
    },

    // Sets the premise for the chat
    setPremise: function(premise) {
        gs.info("Setting premise: " + premise, "ChatGPT");
        return this.createMessage("system", premise);
    },

    // Creates a message object with role and content
    createMessage: function(role, content) {
        gs.info("Creating message with role: " + role + " and content: " + content, "ChatGPT");
        return {
            "role": role,
            "content": content
        };
    },

    // Submits chat messages to the model applied in this script include
    submitChat: function(messages) {
        gs.info("Submitting chat messages: " + JSON.stringify(messages), "ChatGPT");
        try {
            // Create a new RESTMessageV2 instance using the ChatGPT REST message and its POST method
            var request = new sn_ws.RESTMessageV2("ChatGPT", "POST");
            request.setHttpMethod('POST');

            // Set the payload including model, messages, and temperature
            var payload = {
                "model": this.model,
                "messages": messages,
                "temperature": 0.7
            };

            // Log the payload for debugging purposes
            gs.info("Payload: " + JSON.stringify(payload), "ChatGPT");

            // Set the request body
            request.setRequestBody(JSON.stringify(payload));

            // Send the request
            var response = request.execute();

            // Get the response status and content type
            var httpResponseStatus = response.getStatusCode();
            var httpResponseContentType = response.getHeader('Content-Type');

            // If the request is successful and the content type is JSON
            if (httpResponseStatus === 200 && httpResponseContentType === 'application/json') {
                gs.info("ChatGPT API call was successful", "ChatGPT");
                return response.getBody();
            } else {
                gs.error('Error calling the ChatGPT API. HTTP Status: ' + httpResponseStatus, "ChatGPT");
            }
        } catch (ex) {
            // Log any exception that happens during the API call
            var exception_message = ex.getMessage();
            gs.error(exception_message, "ChatGPT");
        }
    },

    type: 'ChatGPT'
};
```
A bit about the functions:

| Function | Notes |
| --- | --- |
| setPremise | Can be used to set the premise of a conversation. For example, you may want ChatGPT to reply in a certain style or format. The premise could be something like, "You are speaking to a non-technical user, so any answers should be summarised for that audience". |
| createMessage | Used to create the message you are about to send, with two parameters: role and content. Generally this is to aid with conversational context, which I'll talk about in a future article (a rough sketch follows this table). To use it, call the function with the role as "user" and the content as the message you want to send. |
| submitChat | This function sends the messages to the ChatGPT endpoint using the REST message we defined earlier. It takes an array of messages, so you can use the createMessage function and pass the result through, or use the setPremise function first to set the premise of the chat and then add a message after it. |
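As mentioned above, conversational context is something I'll cover properly in a future article. As a rough sketch for now, context is carried simply by passing earlier messages back in the array; the assistant reply below is a made-up stand-in for a real one, purely for illustration.

```javascript
// Rough sketch of conversational context: earlier messages (including a previous
// assistant reply) are passed back in the array so the model can build on them.
var chatGPT = new global.ChatGPT();

var premise = chatGPT.setPremise("You are a helpful ServiceNow assistant.");
var firstQuestion = chatGPT.createMessage("user", "What is a Script Include?");

// In a real conversation this would be the content extracted from the first response;
// the text here is just a stand-in for illustration.
var firstAnswer = chatGPT.createMessage("assistant", "A Script Include is reusable server-side code.");

var followUp = chatGPT.createMessage("user", "Can you give me a short example of one?");

var result = chatGPT.submitChat([premise, firstQuestion, firstAnswer, followUp]);
gs.print(result);
```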
Testing the code
To test that the code works, you can create a fix script. Here is an example that sets the premise that ChatGPT is a comedian and then asks for its thoughts on rainy weather.
```javascript
// Create an instance of the ChatGPT class
var chatGPT = new global.ChatGPT();

// Set the premise for the chat with the assistant. The premise helps set the context of the conversation
var premise = chatGPT.setPremise("You are a comedian and you love to make people laugh. Your responses should be comedic");

// Create a user message asking the assistant what it thinks about rainy weather
var message1 = chatGPT.createMessage("user", "What do you think about rainy weather?");

// Submit the chat to the GPT-3.5 Turbo model (default). The chat consists of the premise and the user's message.
// The 'submitChat' function accepts an array of messages which form a conversation.
var result = chatGPT.submitChat([premise, message1]);

// Print the result. This will be the raw JSON response body returned by the API.
gs.print(result);
```
You should have a payload like this:
{ "model": "gpt-3.5-turbo", "messages": [ { "role": "system", "content": "You are a comedian and you love to make people laugh. Your responses should be comedic" }, { "role": "user", "content": "What do you think about rainy weather?" } ], "temperature": 0.7 }
You should get a response like this:
{ "id": "chatcmpl-XXXXXXXXXXXXXXXX", "object": "chat.completion", "created": 1686498256, "model": "gpt-3.5-turbo-0301", "usage": { "prompt_tokens": 38, "completion_tokens": 56, "total_tokens": 94 }, "choices": [ { "message": { "role": "assistant", "content": "Rainy weather? Oh, it's the perfect time to stay curled up in bed all day and pretend like you have a life. Plus, it's the only time you can use the excuse \"sorry, can't go out, it's raining\" to avoid social situations." }, "finish_reason": "stop", "index": 0 } ] }
As you can see, ChatGPT sent a message back with the role of "assistant". I hope this helps! I'll be writing more articles around this, with the aim of getting automatic code deployment working.