
Monday, November 20, 2023

Using OpenAI in Go


 


In this post we will review how to use the OpenAI API in Go. We will download an HTML file and let OpenAI analyze it.


Let's start with the basics: we need to send HTTP requests both to download the HTML file and to access the OpenAI API, so we first write a function to handle HTTP requests:


import (
    "bytes"
    "encoding/json"
    "io"
    "net/http"
)

// sendHttpRequest sends an HTTP request with optional headers and a JSON-encoded body,
// and returns the response body as a string. Any error or non-200 status aborts with a panic.
func sendHttpRequest(
    method string,
    fullUrl string,
    headers map[string]string,
    body interface{},
) string {
    httpClient := http.DefaultClient
    var bodyReader io.Reader
    if body != nil {
        bodyBytes, err := json.Marshal(body)
        if err != nil {
            panic(err)
        }
        bodyReader = bytes.NewReader(bodyBytes)
    }

    httpRequest, err := http.NewRequest(method, fullUrl, bodyReader)
    if err != nil {
        panic(err)
    }
    for key, value := range headers {
        httpRequest.Header.Set(key, value)
    }
    httpResponse, err := httpClient.Do(httpRequest)
    if err != nil {
        panic(err)
    }
    defer httpResponse.Body.Close()

    bodyBytes, err := io.ReadAll(httpResponse.Body)
    if err != nil {
        panic(err)
    }

    bodyString := string(bodyBytes)

    if httpResponse.StatusCode != http.StatusOK {
        panic(bodyString)
    }
    return bodyString
}


Let's analyze a specific HTML file:


analyzeHtml("https://assets.bounceexchange.com/assets/bounce/local_storage_frame17.min.html")



and the analyze function is:


func analyzeHtml(
    fullUrl string,
) {
    htmlData := sendHttpRequest("GET", fullUrl, nil, nil)

    openAiListModels()

    guide := "provide the following in JSON format:\n" +
        "1. length, int, the characters amount in the HTML\n" +
        "2. scripts, boolean, does the HTML include javascripts"
    openAiCompletions(&completionRequest{
        Model:     "gpt-3.5-turbo-1106",
        MaxTokens: 100,
        Messages: []*completionMessage{
            {
                Role:    "system",
                Content: guide,
            },
            {
                Role:    "user",
                Content: htmlData,
            },
        },
        Temperature: 0,
    })
}


We download the HTML file and then use the OpenAI API to analyze it.

As a side note, we can list the models available to us:

func openAiListModels() {
    headers := map[string]string{
        "Authorization": "Bearer " + Config.OpenAiKey,
    }
    result := sendHttpRequest("GET", "https://api.openai.com/v1/models", headers, nil)
    print(result)
}


The structure of the completion request is:

type completionMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

type completionRequest struct {
    Model       string               `json:"model"`
    Messages    []*completionMessage `json:"messages"`
    Temperature float32              `json:"temperature"`
    MaxTokens   int                  `json:"max_tokens"`
}
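
The openAiCompletions function is not listed above; a minimal sketch, assuming it simply POSTs the request to the chat completions endpoint and prints the raw response, the same way openAiListModels does, could be:

func openAiCompletions(request *completionRequest) {
    // Assumption: reuse sendHttpRequest and Config.OpenAiKey from above,
    // and just print the raw JSON response.
    headers := map[string]string{
        "Authorization": "Bearer " + Config.OpenAiKey,
        "Content-Type":  "application/json",
    }
    result := sendHttpRequest("POST", "https://api.openai.com/v1/chat/completions", headers, request)
    print(result)
}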



and an example output is:


{
    "id": "chatcmpl-8MyaJSYyAu4CVheOnHeQaA4Z7ThRQ",
    "object": "chat.completion",
    "created": 1700486795,
    "model": "gpt-3.5-turbo-1106",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "```json\n{\n \"length\": 2283,\n \"scripts\": true\n}\n```"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 705,
        "completion_tokens": 20,
        "total_tokens": 725
    },
    "system_fingerprint": "fp_eeff13170a"
}


This output can now be parsed by other code, which can then act on it automatically as required.
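
For example, a minimal parsing sketch, using only the encoding/json and strings packages (the response types, the parseAnalysis helper, and the code-fence stripping are assumptions based on the output above, not part of the original code), could be:

type completionChoice struct {
    Message completionMessage `json:"message"`
}

type completionResponse struct {
    Choices []*completionChoice `json:"choices"`
}

type htmlAnalysis struct {
    Length  int  `json:"length"`
    Scripts bool `json:"scripts"`
}

// parseAnalysis is a hypothetical helper: it extracts the assistant message,
// strips the ```json fence around it, and unmarshals the analysis fields.
func parseAnalysis(responseJson string) htmlAnalysis {
    var response completionResponse
    if err := json.Unmarshal([]byte(responseJson), &response); err != nil {
        panic(err)
    }

    content := response.Choices[0].Message.Content
    content = strings.TrimPrefix(content, "```json")
    content = strings.TrimSuffix(content, "```")
    content = strings.TrimSpace(content)

    var analysis htmlAnalysis
    if err := json.Unmarshal([]byte(content), &analysis); err != nil {
        panic(err)
    }
    return analysis
}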


