Wednesday, September 30, 2020

Application Security Testing Tools




Any globally accessible public web site will be attacked. There are various types of attacks:

  • DDoS - Distributed Denial of Service
  • Phishing attacks
  • ATO - Account Takeover, e.g. breaking passwords by brute forcing common passwords
  • SQL Injection
  • and many more...

In this post, I would like to address two other types of attacks: Inventory Denial and Web Scraping.

An Inventory Denial attack depletes the stock by starting a purchase process, but never committing the transaction. A known example is an airline web site that allows ordering flight seats online. Once a seat is selected, it is reserved for the customer for a limited time, e.g. 15 minutes, allowing the completion of the payment transaction. An attacker could use bots to re-reserve the flight seats every 15 minutes. This would have catastrophic implications for the airline, whose planes would depart empty.

A Web Scraping attack scans a site to retrieve valuable information and create an unfair advantage for competitors. For example, if I can find out the price of every item in my competitor's web site, I can tune my own items' prices to be just a bit lower.

Both of these attacks share a common attack vector: repeated requests to the web site from one or more web clients. Most protection solutions address this by identifying bot requests. Assuming that the attacker uses a bot army to run many requests, a protection solution that blocks the bots would prevent the attack. A question to be raised here is: what if the web clients are not bots, but rather a cheap human labor army? But let's focus, for now, on the bots.

Assuming that we are using a protection solution to protect our site, how can we check that it is indeed protecting our site? For this we can use several tools:



Burp Suite


The Burp Suite Community Edition allows populating a header with a value from a predefined list of values. This can be used to repeatedly send a request, without letting the server side realize that it is actually the same request being sent over and over.

To change a header, use the Burp Suite Community Edition, and then:
  • Open Burp Suite
  • Click on Proxy, Intercept
  • Change to "intercept is on"
  • Send the request that you want to reshape
  • Select the request on the Burp Suite
  • Click on Action, Send to Intruder
  • Change to "intercept is off"
  • Select the value of the header that you want to change, and click the Add § button
  • Set Attack type to Pitchfork
  • Select the Payload tab
  • Choose a file containing a simple list of the values
  • Mark the URL encode characters checkbox
  • Click the Start Attack button


IPFuck


The IPFuck chrome extension allows simulating multiple client IPs. This tests the protection solution's ability to handle a distributed attack from multiple web clients.

Notice that IPFuck cannot actually change the source IP, as the TCP packets must remain valid when sent to the next router. Instead, IPFuck changes the apparent source IP by adding an HTTP header carrying a client source IP, as if it was added by a proxy in the middle. When using IPFuck, you can select which header to use: X-Forwarded-For (XFF), Client-IP, or Via.


User Agent Switcher


The User Agent Switcher chrome extension allows simulating various user agents, hence simulating different browsers. This tests the protection solution's fingerprinting abilities.

Using this extension, you can simulate predefined user agents, as well as custom user agents. The simulation is done by setting the "User-Agent" HTTP header.
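Under the hood, both IPFuck and the User Agent Switcher simply rewrite HTTP request headers. As a rough illustration (not the extensions' actual code), here is a minimal GO sketch that sends a request with a spoofed X-Forwarded-For and a custom User-Agent; the header values and URL are made up for the example:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Illustrative target URL - replace with the site under test.
	req, err := http.NewRequest("GET", "http://mysite.com/product_info?id=1", nil)
	if err != nil {
		panic(err)
	}

	// Pretend the request passed through a proxy with a different client IP,
	// similar to what IPFuck does.
	req.Header.Set("X-Forwarded-For", "203.0.113.7")

	// Pretend the request comes from a different browser,
	// similar to what the User Agent Switcher does.
	req.Header.Set("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) ExampleBrowser/1.0")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}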


Scraper.io


The Scraper.io chrome extension allows sending multiple requests to a site, changing one or more parameters in the request each time, and collecting values from the response. This extension can easily be used for an inventory denial attack.

You can, for example, set the scraper Start URL as:
http://mysite.com/product_info?id=[1-100]

This would send 100 requests, retrieving the info of the first 100 products.
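The [1-100] range is expanded by the extension into a separate request per value. A rough GO equivalent of that loop, using the example URL above, would be:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	for id := 1; id <= 100; id++ {
		url := fmt.Sprintf("http://mysite.com/product_info?id=%v", id)
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}

		body, err := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			panic(err)
		}

		// A real scraper would parse the interesting values out of the body.
		fmt.Printf("product %v: %v bytes\n", id, len(body))
	}
}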


Final Note


The tools listed here enable a simple test of a site without writing any code. They can be used together to perform a basic check of your protection suite.

A more thorough test can be performed by purchasing a pool of real IPs, and by using custom, hand-written programs.


Wednesday, September 23, 2020

Writing Files to the Google Cloud Platform Storage using GO

 


In this post we will review the steps required to write files into the Google Cloud Platform (aka GCP) storage using a GO application.


The application itself is simple:


package main

import (
	"cloud.google.com/go/storage"
	"context"
	"fmt"
)

func main() {
	client, err := storage.NewClient(context.Background())
	if err != nil {
		panic(err)
	}

	bucket := client.Bucket("mybucket")
	object := bucket.Object("myfile")
	writer := object.NewWriter(context.Background())

	data := "this is my file content"
	bytes, err := writer.Write([]byte(data))
	if err != nil {
		panic(err)
	}
	fmt.Printf("wrote %v bytes\n", bytes)

	err = writer.Close()
	if err != nil {
		panic(err)
	}
}



We create a storage client, and write text data to the file myfile in the mybucket bucket.

Running this application fails with the following error:


could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.


This is because we need to create a service account, and grant it permissions to this bucket.


To create a service account, open the GCP console, and select "IAM & Admin", Service Accounts. Then click to add a new service account, and create it. In this example we create a service account named test-demo-write.





Click Create, Next, and Done.
This means that we do not grant any special permissions in this scope, as we will grant permissions to a specific bucket later.


Next we create a key that will be used to authenticate as this service account. Select the account, and click on Create Key. Choose JSON, and save the JSON key file locally as my-key.json.





To grant permissions to the bucket, we open the GCP console, Storage, Browser. Then we click on the 3 dots icon next to the related bucket name, and select Edit Bucket Permissions.




Fill in the service account email, and select the role for accessing the bucket. I have created my own role with the following permissions:

  • storage.objects.create
  • storage.objects.delete





Now we can set an environment variable named GOOGLE_APPLICATION_CREDENTIALS to the path of my-key.json, and rerun our application, this time successfully.
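As an alternative to the environment variable, the key file path can be passed explicitly when creating the client, using the option package from google.golang.org/api. A minimal sketch, assuming the key file is saved as my-key.json in the working directory:

package main

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

func main() {
	// Pass the service account key file directly, instead of relying on
	// the GOOGLE_APPLICATION_CREDENTIALS environment variable.
	client, err := storage.NewClient(context.Background(),
		option.WithCredentialsFile("my-key.json"))
	if err != nil {
		panic(err)
	}

	// Use the client as in the example above.
	_ = client
}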


Wednesday, September 16, 2020

A Trie GO implementation

 


The image is taken from Wikipedia



I've recently implemented a trie in GO. While doing so I've used some of the best practices that I've presented in the post GO Access Control Best Practice, and I thought it would be nice to review the code based on these best practices.


The trie is based on a Node struct, which represents both the tree root and every tree node.


package trie

import (
	"sort"
)

type Node struct {
	value    string
	count    int
	children map[string]*Node
}

func Produce(value string) *Node {
	return &Node{
		value:    value,
		count:    0,
		children: make(map[string]*Node),
	}
}



Notice the Produce function hides the internals of initializing the struct, which in this case includes creation of a map.

To build the trie, we add words to it. In this case the trie is built for sentences rather than for single words, so each node's value is a string and not a character. Hence the Add method is as follows:


func (n *Node) Add(values []string) {
	if len(values) == 0 {
		n.count++
		return
	}

	childName := values[0]
	child, exists := n.children[childName]
	if !exists {
		child = Produce(childName)
		n.children[childName] = child
	}

	child.Add(values[1:])
}



The Add method recursively calls itself until it adds all the nodes required to hold the sentence.


The last missing piece is the tree traversal, which, as expected from proper code, does not expose the internals of the implementation.


type Scanner func(path []string, count int)

func (n *Node) TraverseDepthFirst(scanner Scanner) {
	n.traverseDepthFirst(scanner, []string{})
}

func (n *Node) traverseDepthFirst(scanner Scanner, path []string) {
	newPath := make([]string, len(path)+1)
	for i, value := range path {
		newPath[i] = value
	}
	newPath[len(path)] = n.value

	sorted := n.sortedNodes()
	for _, name := range sorted {
		child := n.children[name]
		child.traverseDepthFirst(scanner, newPath)
	}

	scanner(newPath, n.count)
}

func (n *Node) sortedNodes() []string {
	sorted := make([]string, 0)

	for name := range n.children {
		sorted = append(sorted, name)
	}
	sort.Strings(sorted)
	return sorted
}



The trie traversal uses the scanner design pattern, invoking the scanner callback in a depth-first order. This, once again, avoids coupling between the trie's internal implementation and the components using it.


An example of usage is:


package trie

import (
	"fmt"
	"testing"
)

func Test(t *testing.T) {
	root := Produce("")
	root.Add([]string{"a", "dog"})
	root.Add([]string{"a", "dog", "is", "barking"})
	root.Add([]string{"this", "is", "a", "cat"})
	root.Add([]string{"this", "is", "a", "dog"})
	root.Add([]string{"this", "is", "a", "dog"})
	root.TraverseDepthFirst(func(path []string, count int) {
		fmt.Println(path, count)
	})
}



And the output of the trie traversal is:


[ a dog is barking] 1
[ a dog is] 0
[ a dog] 1
[ a] 0
[ this is a cat] 1
[ this is a dog] 2
[ this is a] 0
[ this is] 0
[ this] 0
[] 0



List and Read Files from AWS S3 using GoLang



 

In this post we will review how to list files and read files from AWS S3 using GO.

We will be using the AWS SDK for GO: aws-sdk-go.

To access AWS S3, you must use valid credentials. The default chain of credentials providers includes the following (quoted from the AWS SDK):

  1. Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles.

  2. Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.

  3. EC2 Instance Role Credentials - Use EC2 Instance Role to assign credentials to application running on an EC2 instance. This removes the need to manage credential files in production.


The AWS documentation recommends using the 3rd method, as it is the most secure alternative, and it also manages the credentials automatically. Note that this method can be used only when running your code on an AWS EC2 instance. Trying to run the code on a non-EC2 machine while relying on a managed role would cause the following error:

panic: NoCredentialProviders: no valid providers in chain. Deprecated.


Let's look at the main logic: we will connect to AWS S3, list the files in a specific folder, then read the first file from the list, and print it to STDOUT.

The main code is:



package main

import (
	"fmt"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"io/ioutil"
	"sort"
)

func main() {
	region := "us-east-1"

	config := aws.Config{
		Region: aws.String(region),
	}
	awsSession, err := session.NewSession(&config)
	if err != nil {
		panic(err)
	}

	s3Client := s3.New(awsSession)

	folder := "my-folder"
	bucket := "my-bucket"

	files := list(s3Client, bucket, folder)
	bytes := read(s3Client, bucket, files[0])
	fmt.Printf("file data is:\n%v\n", string(bytes))
}



The list function receives a bucket name and a folder, and lists all the files within this folder. Notice that the listing is recursive, which means that files in sub folders are also returned. Also notice that the returned strings are the files' keys: a key is the full path to the file, starting from the bucket root, regardless of the folder used in the list API.

The list function is:



func list(s3Client *s3.S3, bucket string, folder string) []string {
	params := &s3.ListObjectsInput{
		Bucket: aws.String(bucket),
		Prefix: aws.String(folder),
	}

	resp, err := s3Client.ListObjects(params)
	if err != nil {
		panic(err)
	}

	items := make([]string, 0)
	for _, key := range resp.Contents {
		items = append(items, *key.Key)
	}

	sort.Strings(items)
	return items
}


Finally, let's review the read function. It reads a file from AWS S3, and returns the file content as a byte array. For large files, keeping the entire file in the process memory could be a problem; in that case, avoid this method, and instead consider using the AWS S3 download API (see the sketch after the function below).



func read(s3Client *s3.S3, bucket string, file string) []byte {
	getObject := &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(file),
	}

	result, err := s3Client.GetObject(getObject)
	if err != nil {
		panic(err)
	}
	defer result.Body.Close()

	data, err := ioutil.ReadAll(result.Body)
	if err != nil {
		panic(err)
	}

	return data
}
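As a sketch of the download API alternative mentioned above, the s3manager package from the same SDK can stream the object into an io.WriterAt, such as a local file, without holding it all in memory. This assumes the additional imports os and github.com/aws/aws-sdk-go/service/s3/s3manager; the local file path parameter is illustrative:

func download(awsSession *session.Session, bucket string, key string, localPath string) error {
	// Create the local file that will receive the object content.
	file, err := os.Create(localPath)
	if err != nil {
		return err
	}
	defer file.Close()

	// The s3manager downloader fetches the object in parts and writes them
	// to the file, so the entire object is not kept in the process memory.
	downloader := s3manager.NewDownloader(awsSession)
	_, err = downloader.Download(file, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	return err
}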



Final Notes

In this post we've used some basic AWS S3 APIs to list files in a bucket, and to read a file's content.

For buckets with more than 1,000 files, pagination is used, and hence the list API should be called repeatedly, with ListObjectsInput.Marker used to advance between pages.
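One possible way to handle the pagination is the ListObjectsPages helper, which repeatedly calls the list API and hands each page to a callback. A sketch based on the list function above:

func listAll(s3Client *s3.S3, bucket string, folder string) []string {
	params := &s3.ListObjectsInput{
		Bucket: aws.String(bucket),
		Prefix: aws.String(folder),
	}

	items := make([]string, 0)

	// ListObjectsPages keeps calling ListObjects with the proper marker
	// until the last page is reached, or until the callback returns false.
	err := s3Client.ListObjectsPages(params, func(page *s3.ListObjectsOutput, lastPage bool) bool {
		for _, object := range page.Contents {
			items = append(items, *object.Key)
		}
		return true
	})
	if err != nil {
		panic(err)
	}

	sort.Strings(items)
	return items
}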


Thursday, September 10, 2020

Using NGINX as a Reverse Proxy for Google OAuth 2.0

 

In this post we will review how to secure access to multiple web applications using Google OAuth 2.0. The post was created based on information from several sites, each providing a part of the solution.

See also my previous posts about related issues.



Why do we need to use Google OAuth 2.0?


Once a project goes online, its services are accessible to all users. If it runs in the cloud as a public service, its services are available to the entire world. While some of the services, such as the public facing web server, allow access to everyone, other services, such as the management services, should be secured, and allow access only to your organization's admins.

The following diagram demonstrates an example of a project which includes several public facing services, and several restricted management services.




To secure the restricted management services, we want to use authentication. This means that anyone accessing these services should authenticate.

One way to achieve this is to implement authentication in each service. However, this approach has several problems. First, we need to invest time in our proprietary services to implement authentication. Second, for the non-proprietary 3rd party services, we can only use the authentication methods provided by these tools. Third, we would probably want a central authentication service, instead of configuring users and passwords on each service.

A better alternative is to integrate with an OAuth service, such as the Google OAuth 2.0 service. As demonstrated in the diagram below, access to the restricted management services is verified by an NGINX reverse proxy that allows access only to a specific list of authenticated users. Google OAuth 2.0 handles the authentication of the users, using a user/password and any additional authentication method, such as MFA.




What is the authentication flow?


The following diagram displays the authentication flow.


The steps in the user authentication are as follows:

  1. The user accesses a management site, e.g. prometheus.mydomain.com
  2. The ingress routes the request to the NGINX reverse proxy
  3. The NGINX reverse proxy sends an auth_request to the authentication service
  4. The authentication service finds a first time incoming request without any authentication headers, and returns HTTP status 401 - unauthorized
  5. The NGINX reverse proxy redirects the user to Google OAuth 2.0 service
  6. The user logs in to their account on the Google OAuth 2.0 site
  7. The Google OAuth 2.0 site redirects the user back to the management site, adding an access code parameter to the query string
  8. The user accesses a management site, e.g. prometheus.mydomain.com
  9. The ingress routes the request to the NGINX reverse proxy
  10. The NGINX reverse proxy sends an auth_request to the authentication service
  11. The authentication service finds the access code, and sends a verification request to the Google OAuth 2.0 service
  12. The Google OAuth 2.0 service returns that the access code is valid
  13. The authentication service sends a user info request to Google service
  14. The Google service returns the user information, including the user email
  15. The authentication service finds that the user email is allowed access, creates and stores locally an access token to enable fast access for future requests in the same session, and returns the access token along with HTTP status code 200
  16. The NGINX reverse proxy permits access to the management site
  17. The management site returns its response
  18. The NGINX reverse proxy adds the access token as a session cookie



Create a Google OAuth 2.0 Client ID


To create a client ID, log in to the Google Developers Console, and create a new project in case you do not already have one. Then click on the hamburger menu on the top left, and select APIs & Services, Credentials.






Click on Create Credentials, and then select OAuth 2.0 Client ID.
Select Web Application as the application type, and add the URIs that will be used to access the site.
If you are using multiple backend services, each with a different domain name, you will need to add each of the domain names. You would probably also want to add another URI, such as localhost:8080, for debugging purposes.






Once created, you will be shown the client ID and the client secret. Keep these values for use in the later steps.


The authentication service

I have implemented the authentication service in GO.

First we need to create an HTTP server.


package main

import (
	"context"
	"encoding/json"
	"fmt"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	"io/ioutil"
	"log"
	"math/rand"
	"net/http"
)

var tokens = make(map[string]bool)

type UserInfo struct {
	Email string `json:"email"`
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/auth-callback", handler)

	server := &http.Server{
		Addr:    fmt.Sprintf(":8000"),
		Handler: mux,
	}

	log.Printf("Starting HTTP Server. Listening at %q", server.Addr)
	if err := server.ListenAndServe(); err != http.ErrServerClosed {
		log.Printf("%v", err)
	} else {
		log.Println("Server closed!")
	}
}



Next, add a handler for the auth request. It receives the Google auth code from the query string, for validation against the Google Auth API. The auth_token is used to prevent repeated access to the Google Auth service within the same session; it will be set as a session cookie by the NGINX reverse proxy.


func handler(response http.ResponseWriter, request *http.Request) {
	code := request.URL.Query().Get("code")
	redirectUri := request.URL.Query().Get("redirect_uri")

	// Header.Get handles the header name canonicalization, and returns
	// an empty string if the header is missing.
	token := request.Header.Get("auth_token")

	token, err := authenticate(token, code, redirectUri)
	if err != nil {
		response.WriteHeader(http.StatusUnauthorized)
		response.Write([]byte(err.Error()))
	} else {
		response.Header().Set("auth_token", token)
		response.Write([]byte("access allowed"))
	}
}



Implement the token validation code:


func authenticate(token string, code string, redirectUri string) (string, error) {
	if token != "" && tokens[token] {
		// user already has a session cookie - allow access
		return token, nil
	}

	err := accessOAuth(code, redirectUri)
	if err != nil {
		return "", fmt.Errorf("oauth validation failed: %v", err)
	}

	randomToken := randSeq(128)
	tokens[randomToken] = true

	return randomToken, nil
}

func randSeq(n int) string {
	var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
	b := make([]rune, n)
	for i := range b {
		b[i] = letters[rand.Intn(len(letters))]
	}
	return string(b)
}
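Note that math/rand produces predictable values, so the generated token can, in principle, be guessed. For production, a possible alternative is generating the token from crypto/rand; a sketch (since math/rand is already imported, crypto/rand is imported here under the alias crand, together with encoding/hex):

// cryptoToken returns a hex encoded random token of n bytes,
// generated from a cryptographically secure random source.
func cryptoToken(n int) (string, error) {
	buffer := make([]byte, n)
	if _, err := crand.Read(buffer); err != nil {
		return "", err
	}
	return hex.EncodeToString(buffer), nil
}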


And finally implement Google API usage.

I am just validating a hard coded email, but you should add your own application code. You can count on the fact that the user email is indeed as specified, but you still want to allow access only to a specified list of your organization's administrators (see the sketch after the code below).



func accessOAuth(code string, redirectUri string) error {
	if code == "" {
		return fmt.Errorf("empty oauth code provided, will not access oauth")
	}

	googleOauthConfig := oauth2.Config{
		RedirectURL:  redirectUri,
		ClientID:     "YOUR_CLIENT_ID_HERE e.g. 574647965688-634v8s2k6pnfhlto4245bna4ib5aj6o0.apps.googleusercontent.com",
		ClientSecret: "YOUR_CLIENT_SECRET_HERE e.g. Wc3WH4RHlPK-j32fu1w8JciD",
		Scopes:       []string{"https://www.googleapis.com/auth/userinfo.email"},
		Endpoint:     google.Endpoint,
	}

	token, err := googleOauthConfig.Exchange(context.Background(), code)
	if err != nil {
		return fmt.Errorf("exchange failed: %v", err)
	}

	url := "https://www.googleapis.com/oauth2/v2/userinfo?access_token=" + token.AccessToken
	response, err := http.Get(url)
	if err != nil {
		return fmt.Errorf("get user info failed: %v", err)
	}
	defer response.Body.Close()

	bytes, err := ioutil.ReadAll(response.Body)
	if err != nil {
		return fmt.Errorf("read get user info response failed: %v", err)
	}

	info := UserInfo{}
	err = json.Unmarshal(bytes, &info)
	if err != nil {
		return fmt.Errorf("unmarshal failed: %v", err)
	}

	// replace with your own email validation method;
	// access is denied unless the email matches the allowed address
	if info.Email != "my_allowed_email@google.com" {
		return fmt.Errorf("email %v is not permitted", info.Email)
	}

	return nil
}
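If more than one administrator should be allowed, a possible variant (the emails below are placeholders) is to check the returned email against a hard-coded allow list, and call it from accessOAuth instead of comparing a single address:

// allowedEmails is a placeholder allow list of organization administrators.
var allowedEmails = map[string]bool{
	"first_admin@my-org.com":  true,
	"second_admin@my-org.com": true,
}

// emailAllowed reports whether the authenticated email may access the site.
func emailAllowed(email string) bool {
	return allowedEmails[email]
}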



The NGINX reverse proxy


We should configure NGINX to orchestrate the authentication flow. In this example, I've only added prometheus.my-domain.com as a backend service, but you should duplicate this configuration for each restricted backend service that you have.


user  nginx;
worker_processes 10;

error_log /dev/stdout debug;
pid /var/run/nginx.pid;

load_module modules/ngx_http_js_module.so;
load_module modules/ngx_stream_js_module.so;

events {
    worker_connections 10240;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    resolver 8.8.8.8;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log off;
    sendfile on;
    port_in_redirect off;

    upstream prometheus {
        server prometheus-service;
        keepalive 1000;
        keepalive_requests 1000000;
        keepalive_timeout 300s;
    }

    upstream auth {
        server auth-service;
        keepalive 1000;
        keepalive_requests 1000000;
        keepalive_timeout 300s;
    }

    server {
        listen 8080;
        server_name localhost;

        location /health {
            return 200 'NGINX is alive';
        }
    }

    server {
        listen 8080;
        server_name prometheus.my-domain.com;

        location = /auth {
            internal;
            proxy_method GET;
            set $query '';
            if ($request_uri ~* "[^\?]+\?(.*)$") {
                set $query $1;
            }
            proxy_pass http://auth/auth-callback?redirect_uri=http%3A//prometheus.my-domain.com:30026&$query;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header auth_token $cookie_auth_token;
        }

        error_page 401 = @error401;

        location @error401 {
            return 302 https://accounts.google.com/o/oauth2/v2/auth?scope=https%3A//www.googleapis.com/auth/userinfo.email&response_type=code&redirect_uri=http%3A//prometheus.my-domain.com:30026&client_id=574647965688-7l5v8s2k7pnfhlto42ilbna4ib5kk6o0.apps.googleusercontent.com;
        }

        location / {
            auth_request /auth;
            auth_request_set $auth_token $upstream_http_auth_token;
            add_header Set-Cookie "auth_token=$auth_token;Path=/";

            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://prometheus;

            proxy_set_header Connection "";
            proxy_ignore_headers "Cache-Control" "Expires";
            proxy_buffers 32 4m;
            proxy_busy_buffers_size 25m;
            proxy_buffer_size 512k;
            client_max_body_size 10m;
            client_body_buffer_size 4m;
            proxy_connect_timeout 300;
            proxy_read_timeout 300;
            proxy_send_timeout 300;
            proxy_intercept_errors off;
            proxy_http_version 1.1;
        }
    }
}


We include two upstreams in the configuration: the backend service (prometheus), and the auth service.

Access to the backend service is protected using the auth_request directive, which, in turn, sends a request to our authentication service. If the authentication service returns HTTP status 401, we redirect the user to Google's login page. Otherwise, we set a session cookie with the auth_token returned by the authentication service.



Final Notes


In this post we have reviewed a complete solution for securing multiple applications using NGINX as a reverse proxy, and Google OAuth 2.0 as the user authentication authority.

If you are using an Ingress, you will need to configure the restricted management applications' domains to be routed to the NGINX reverse proxy, instead of directly to the management applications' services.


Tuesday, September 1, 2020

NGINX authentication using OKTA

 

In this post we will review the steps to integrate NGINX with OKTA. Most of the work displayed here is based on the post Use nginx to Add Authentication to Any Application.

Side note: In case you want an alternative for OKTA, check my other post: Using NGINX as a Reverse Proxy for Google OAuth 2.0.


We have an NGINX that serves as a reverse proxy for a backend site. Now we want to add authentication of the users using OKTA. This would require any new user arriving at NGINX to authenticate with OKTA before proceeding to the backend site. To implement this, we use the vouch-proxy, which handles the OIDC authentication protocol.


The Requests Flow

The requests flow is listed below:








  1. The user accesses the site URL
  2. NGINX, configured with auth_request, sends the request headers to the vouch proxy
  3. The vouch proxy does not find any access token in the headers, and returns HTTP status 401
  4. NGINX redirects the user to the vouch proxy using a login URL
  5. The vouch proxy redirects the user to the OKTA authentication service.
  6. The user logins to OKTA
  7. OKTA redirects the user back to the site URL, and adds an access token header
  8. NGINX, configured with auth_request, sends the request headers to the vouch proxy
  9. The vouch proxy finds the access token in the headers, and sends it to the OKTA authentication service, which returns the user details
  10. The vouch proxy returns HTTP status 200
  11. NGINX proxies the request to the backend site



    Implementation on Kubernetes


    To implement on kubernetes, we use the sample backend site: http://okta-headers.herokuapp.com

    We add two DNS entries:
    • app1.alontest.com
    • login.alontest.com
    As this example uses NodePort, both of these entries point to one of the kubernetes nodes' IPs.
    Note that both of these DNS entries must share a common parent domain (alontest.com) to allow cookie sharing.


    Create an OKTA Application


    To create an OKTA application, register at the OKTA development site.

    Once you have completed the registration, you will get into the OKTA admin console.

    The OKTA base URL is shown in the OKTA dashboard:





    Click on Applications, and then Add Application of type Web.
    The application URLs should be updated with our vouch proxy URL.
    In addition, the client ID and the client secret should be copied from here to the vouch proxy configuration (see the next sections).







    The NGINX Entities

    The NGINX is deployed in kubernetes, and includes a deployment, a service and a configMap.

    The NGINX service:


    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: okta
      type: NodePort
      ports:
        - port: 80
          targetPort: 8080
          name: tcp-api
          protocol: TCP
          nodePort: 30000




    The NGINX deployment:



    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: okta
      template:
        metadata:
          labels:
            app: okta
        spec:
          terminationGracePeriodSeconds: 1
          containers:
            - name: nginx
              image: nginx
              imagePullPolicy: IfNotPresent
              env:
              volumeMounts:
                - name: nginx-config
                  mountPath: /etc/nginx/nginx.conf
                  subPath: nginx.conf
          volumes:
            - name: nginx-config
              configMap:
                name: nginx-config


    And the NGINX configMap:



    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
    data:
      nginx.conf: |-
        user nginx;
        worker_processes 10;

        error_log /dev/stdout warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 10;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            resolver 8.8.8.8;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log off;
            sendfile on;
            port_in_redirect off;
            proxy_max_temp_file_size 0;

            server {
                listen 8080;
                server_name localhost;

                # Any request to this server will first be sent to this URL
                auth_request /vouch-validate;

                location = /vouch-validate {
                    # This address is where Vouch will be listening on
                    proxy_pass http://vouch-service:80/validate;
                    proxy_pass_request_body off; # no need to send the POST body

                    proxy_set_header Content-Length "";
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header X-Forwarded-Proto $scheme;

                    # these return values are passed to the @error401 call
                    auth_request_set $auth_resp_jwt $upstream_http_x_vouch_jwt;
                    auth_request_set $auth_resp_err $upstream_http_x_vouch_err;
                    auth_request_set $auth_resp_failcount $upstream_http_x_vouch_failcount;
                }

                error_page 401 = @error401;

                # If the user is not logged in, redirect them to Vouch's login URL
                location @error401 {
                    return 302 http://login.alontest.com:30001/login?url=http://$http_host$request_uri&vouch-failcount=$auth_resp_failcount&X-Vouch-Token=$auth_resp_jwt&error=$auth_resp_err;
                }

                location / {
                    proxy_pass http://okta-headers.herokuapp.com;
                }
            }
        }




    Some notes for the NGINX configuration:
    • Our backend site is http://okta-headers.herokuapp.com
    • We use auth_request to validate any request with the vouch proxy
    • The /vouch-validate location configures how to access the vouch proxy
    • In case of HTTP status 401, we redirect the user (using HTTP status 302) to the vouch proxy. Notice that we send a "url" parameter in the query string, pointing back to where the user should be redirected once the authentication is complete.


    The Vouch Proxy Entities

    The vouch proxy is deployed in kubernetes, and includes a deployment, a service and a configMap.

    The vouch proxy service:


    apiVersion: v1
    kind: Service
    metadata:
      name: vouch-service
    spec:
      selector:
        app: vouch
      type: NodePort
      ports:
        - port: 80
          targetPort: 9090
          name: tcp-api
          protocol: TCP
          nodePort: 30001



    The vouch proxy deployment:


    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vouch-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: vouch
      template:
        metadata:
          labels:
            app: vouch
        spec:
          terminationGracePeriodSeconds: 1
          containers:
            - name: vouch
              image: voucher/vouch-proxy
              imagePullPolicy: IfNotPresent
              env:
              volumeMounts:
                - name: vouch-config
                  mountPath: /config
          volumes:
            - name: vouch-config
              configMap:
                name: vouch-config


    And the vouch proxy configMap:


    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vouch-config
    data:
      config.yml: |-
        vouch:
          logLevel: debug
          testing: false
          listen: 0.0.0.0 # VOUCH_LISTEN
          port: 9090 # VOUCH_PORT
          allowAllUsers: true
          jwt:
            maxAge: 240
            compress: true

          cookie:
            name: VouchCookie
            domain: alontest.com
            secure: false
            maxAge: 14400
            sameSite: lax

          session:
            name: VouchSession

          headers:
            jwt: X-Vouch-Token # VOUCH_HEADERS_JWT
            querystring: access_token # VOUCH_HEADERS_QUERYSTRING
            redirect: X-Vouch-Requested-URI # VOUCH_HEADERS_REDIRECT
            claims:
              - groups
              - given_name

          test_url: http://yourdomain.com
          post_logout_redirect_uris:
            - http://myapp.yourdomain.com/login
            - https://oauth2.googleapis.com/revoke
            - https://myorg.okta.com/oauth2/123serverid/v1/logout?post_logout_redirect_uri=http://myapp.yourdomain.com/login

        oauth:
          provider: oidc
          client_id: 0oauxxx18zEdyIR4hxxx
          client_secret: aqvxxxTHi4uzqyFCVvGWTIXFiTFxxx27IloTN0_H
          auth_url: https://dev-137493.okta.com/oauth2/default/v1/authorize
          token_url: https://dev-137493.okta.com/oauth2/default/v1/token
          user_info_url: https://dev-137493.okta.com/oauth2/default/v1/userinfo
          end_session_endpoint: https://dev-137493.okta.com/oauth2/default/v1/logout
          scopes:
            - openid
            - email
            - profile
          callback_url: http://login.alontest.com:30001/auth


    Some notes for the vouch proxy configuration:
    • Using allowAllUsers: true means that once a user is authenticated in OKTA, the vouch proxy permits its access, without additional limitations
    • The cookie.domain specifies the common parent domain shared by the vouch proxy DNS entry and the NGINX DNS entry
    • The cookie.secure: false setting enables us to use HTTP connections (and not HTTPS)
    • The oauth.client_id and oauth.client_secret should be copied from the general section of the application that was created in the OKTA admin site
    • The oauth URL parameters use the OKTA site URL (notice not to use the admin site URL by mistake, like I did at the beginning...)
    • The callback URL is used to access back to the vouch proxy after the OKTA authentication is done


    Final Note

    In this post we have reviewed an NGINX and OKTA integration.

    This is a simple example, but for production you would probably change some of the configuration, such as registering real DNS entries and using SSL for NGINX.