Earl Duque

6 minute read

The Generative AI Controller is releasing in the ServiceNow Platform Vancouver release. At the time of this post’s writing, it is in limited availability, so if you want access to these plugins before they become generally available, please contact your account representative to inquire.

This post is a walk-through of how to get started with the Generative AI Controller. There is more to explore beyond what is covered here, but we wanted to get you started quickly!

Getting started

In this walkthrough, we will use OpenAI to connect your ServiceNow instance to the Generative AI Controller. Microsoft Azure OpenAI is also available to use (if you have access to an Azure key).

We also did a live-stream of this setup, which you can find here.

Obtain an OpenAI API Key

OpenAI API keys are free to create, and new accounts include a free trial usage tier for exploration.

  1. Navigate to https://platform.openai.com/signup and sign up
  2. Navigate to https://platform.openai.com/account/api-keys and click Create new secret key button

    Picture1.png

  3. Copy the key and save it somewhere safe! The key is only shown once.

Store your key in a ServiceNow Credential

  1. In the Navigation filter, navigate to “Connection & Credential Aliases”
  2. Open the “OpenAI” record (ID = sn_openai.OpenAI)
  3. Under Related Links, click “Create New Connection & Credential”

    Picture2.png

  4. Paste your OpenAI Key in the “API Key:” field and press “Create”

    Picture3.png

Set required system property

  1. In the Navigation filter, enter sys_properties.list and press enter. The entire list of properties in the System Properties [sys_properties] table appears.
  2. Find or filter the list for the system property named “com.sn.generative.ai.provider” and open the record
  3. At the top, a banner will say:

    Picture4.png

    Click on “here”

  4. In the Value field type openai and save/update the record
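If you prefer scripting, the same property can be set from a background script instead of editing the record in the UI. This sketch uses the standard GlideSystem `gs.setProperty` / `gs.getProperty` API and only runs inside a ServiceNow instance:

```javascript
// Point the Generative AI Controller at OpenAI by setting the provider
// system property (same effect as editing the sys_properties record above).
gs.setProperty('com.sn.generative.ai.provider', 'openai');

// Read the value back to confirm the change took effect.
gs.info(gs.getProperty('com.sn.generative.ai.provider'));
```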

    Picture5.png

Using the integration

Capabilities

  • Summarize
  • Content Generation
  • QnA (included for Now Assist capabilities)
  • Sentiment Analysis
  • Generic Prompt

Via Flow Designer

  1. Navigate to Flow Designer. On the top right, click “Create > New Flow”. Give the new flow a name.
  2. For “Trigger”, decide how the flow will activate, for example, “whenever an incident record is created”
  3. For “Action”, select the Generative AI Controller spoke and then select one of the capabilities

    Picture6.png

  4. Provide text to the action either manually or from the trigger record.
  5. On the top right, click “save”
  6. Now test the flow by clicking “Test”

Via scripting

Summarize

(function() {
	
	try {
		var inputs = {};
		inputs['texttosummarize'] = 'Replace with the text to summarize'; // String (placeholder value)

		// Start Asynchronously: Uncomment to run in background. Code snippet will not have access to outputs.
		// sn_fd.FlowAPI.getRunner().action('sn_generative_ai.summarize').inBackground().withInputs(inputs).run();
				
		// Execute Synchronously: Run in foreground. Code snippet has access to outputs.
		var result = sn_fd.FlowAPI.getRunner().action('sn_generative_ai.summarize').inForeground().withInputs(inputs).run();
		var outputs = result.getOutputs();

		// Get Outputs:
		// Note: outputs can only be retrieved when executing synchronously.
		var response = outputs['response']; // String
		var provider = outputs['provider']; // String
		
	} catch (ex) {
		var message = ex.getMessage();
		gs.error(message);
	}
	
})();

Generate content

(function() {
	
	try {
		var inputs = {};
		inputs['topic'] = 'Replace with a topic to generate content about'; // String (placeholder value)

		// Start Asynchronously: Uncomment to run in background. Code snippet will not have access to outputs.
		// sn_fd.FlowAPI.getRunner().action('sn_generative_ai.generate_content').inBackground().withInputs(inputs).run();
				
		// Execute Synchronously: Run in foreground. Code snippet has access to outputs.
		var result = sn_fd.FlowAPI.getRunner().action('sn_generative_ai.generate_content').inForeground().withInputs(inputs).run();
		var outputs = result.getOutputs();

		// Get Outputs:
		// Note: outputs can only be retrieved when executing synchronously.
		var response = outputs['response']; // String
		var provider = outputs['provider']; // String
		
	} catch (ex) {
		var message = ex.getMessage();
		gs.error(message);
	}
	
})();

Generic prompt

(function() {
	
	try {
		var inputs = {};
		inputs['prompt'] = 'Replace with your prompt'; // String (placeholder value)

		// Start Asynchronously: Uncomment to run in background. Code snippet will not have access to outputs.
		// sn_fd.FlowAPI.getRunner().action('sn_generative_ai.generic_prompt').inBackground().withInputs(inputs).run();
				
		// Execute Synchronously: Run in foreground. Code snippet has access to outputs.
		var result = sn_fd.FlowAPI.getRunner().action('sn_generative_ai.generic_prompt').inForeground().withInputs(inputs).run();
		var outputs = result.getOutputs();

		// Get Outputs:
		// Note: outputs can only be retrieved when executing synchronously.
		var response = outputs['response']; // String
		var provider = outputs['provider']; // String
		
	} catch (ex) {
		var message = ex.getMessage();
		gs.error(message);
	}
	
})();

Q&A

(function() {
	
	try {
		var inputs = {};
		inputs['searchtext'] = 'Replace with your question'; // String (placeholder value)
		inputs['context'] = 'Replace with the context the answer should be drawn from'; // String (placeholder value)

		// Start Asynchronously: Uncomment to run in background. Code snippet will not have access to outputs.
		// sn_fd.FlowAPI.getRunner().action('sn_generative_ai.qna').inBackground().withInputs(inputs).run();
				
		// Execute Synchronously: Run in foreground. Code snippet has access to outputs.
		var result = sn_fd.FlowAPI.getRunner().action('sn_generative_ai.qna').inForeground().withInputs(inputs).run();
		var outputs = result.getOutputs();

		// Get Outputs:
		// Note: outputs can only be retrieved when executing synchronously.
		var response = outputs['response']; // String
		var provider = outputs['provider']; // String
		
	} catch (ex) {
		var message = ex.getMessage();
		gs.error(message);
	}
	
})();
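One practical note when scripting these actions: the default max_tokens is 500 and providers bill per token, so you may want to cap very long input before passing it in. Below is a rough, provider-agnostic sketch using the common approximation of about 4 characters per token (the helper name and heuristic are ours, not part of the Generative AI Controller):

```javascript
// Approximate token-budget truncation before sending text to a
// Generative AI Controller action. OpenAI-family models average
// roughly 4 characters per token, so this is a heuristic, not an
// exact tokenizer.
function truncateToApproxTokens(text, maxTokens) {
	var maxChars = maxTokens * 4; // ~4 chars per token heuristic
	return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```

You could run input through a helper like this before assigning it to an input field such as `inputs['texttosummarize']` in the snippets above.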

Via Virtual Agent Designer

In Virtual Agent Designer, the Generative AI Controller capabilities are available as actions built directly into the designer user interface.

Frequently Asked Questions (FAQ)

  • What Generative AI providers are currently supported?
    • OpenAI and Azure OpenAI
  • What out-of-box use cases are provided by the Generative AI Controller app?
    • Summarization, Content Generation, QnA, and Generic Prompt
  • What ServiceNow minimum version is compatible with the app?
    • Vancouver Patch 2+
  • What entitlements are required?
    • Inquire with your account representative
  • What builder interfaces have Generative AI capabilities?
    • Virtual Agent Designer, Mobile App Builder, Flow Designer, and scripting
  • Does my ServiceNow data leave my instance when using 3rd party Generative AI providers?
    • Yes. Please be aware that data and queries you make from the app are sent to OpenAI or Azure OpenAI. Review their data privacy policies and decide what usage policy best fits your organization.
  • Does the Generative AI Controller use Integration Hub to connect to OpenAI / Azure OpenAI?
    • Yes. Integration Hub spokes embedded in the Generative AI Controller connect to the third-party LLM service providers. LLM transactions from a ServiceNow production instance are counted as Integration Hub transactions; transactions from sub-production instances are not.
  • Is the Generative AI output moderated for unsafe or harmful content?
    • Yes. When OpenAI is the provider, we automatically apply its moderation API. For Azure OpenAI, we employ its built-in moderation.
  • Is there a possibility that Generative AI output is inaccurate (e.g. hallucination)?
    • While we have minimized the possibility of hallucinations through selective use cases and prompt engineering, there is always a risk of inaccurate information in generated content. Thus, it is always recommended to employ human-in-the-loop review for such content.
  • Is it possible to modify/create my own prompt, temperature, or number of tokens?
    • Currently no. Prompts are currently read-only. Default temperature = 0. Default max_tokens = 500.
  • What models are currently supported?
    • GPT-3, GPT-3.5 Turbo, and GPT-4.
  • What is the cost per transaction calling the API?
    • It depends on the provider and the length of the output. As an example, OpenAI charges $0.002 per 1K tokens for GPT-3.5 Turbo.
  • When using Azure OpenAI, I’m running into an error: “The API deployment for this resource does not exist. If you create the deployment within the last 5 minutes, please wait a moment and try again.”
    • Make sure your deployment name is entered as the model name
  • Is it possible to automatically maintain context between multiple questions?
    • Not at this time
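To put the per-transaction cost answer above in concrete terms, here is a back-of-the-envelope calculation (the function and example numbers are illustrative only; check your provider's current pricing):

```javascript
// Estimate the USD cost of a single completion given a token count and
// the provider's price per 1,000 tokens (e.g. $0.002 for GPT-3.5 Turbo).
function estimateCostUSD(tokenCount, pricePerThousandTokens) {
	return (tokenCount / 1000) * pricePerThousandTokens;
}

// A maximum-length response under the default max_tokens of 500:
var worstCase = estimateCostUSD(500, 0.002); // 0.001 USD, i.e. a tenth of a cent
```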
