Connecting to Cloud SQL from Cloud Run

Cloud Run was recently released as a new serverless offering on Google Cloud Platform, based on Knative. What struck me instantly, however, was the lack of built-in support for Cloud SQL. Because most of our services run SQL databases on Cloud SQL, I started looking for ways to connect to it until we have a proper solution from Google. We’ll build a simple Go application with a single endpoint to demonstrate how to connect; the approach itself, however, is language-agnostic. The Cloud SQL proxy will run alongside the application to connect securely to the database.

I arrived at an interesting solution, but beware: because there is no secret management in managed Cloud Run yet, we’ll embed a service account key in the image, which is not recommended for production use.

Creating the Application

The application is a Go server with a single endpoint at its root /. It always returns the first user found in the connected database. This user (and its table, if needed) is created on application startup; you can add more endpoints if you want to (e.g. creating a new user). The server uses GORM as an ORM for simplicity.

package main

import (
	"fmt"
	"net/http"
	"os"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/postgres" // register the postgres dialect
)

// User is a model for the user entity
type User struct {
	gorm.Model

	Name string
}

// createGetUserHandler returns a user handler function that
// returns the first user's name from the DB to the http caller
func createGetUserHandler(db *gorm.DB) http.HandlerFunc {
	return func(rw http.ResponseWriter, req *http.Request) {
		var user User
		db.First(&user)

		// write user name in the response
		rw.Write([]byte(user.Name))
	}
}

func main() {
	// get our OS variables
	user := os.Getenv("DB_USER")
	pass := os.Getenv("DB_PASS")

	// connect to the DB through the local proxy
	db, err := gorm.Open(
		"postgres",
		fmt.Sprintf("host=127.0.0.1 port=5432 user=%s password=%s sslmode=disable", user, pass),
	)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	db.AutoMigrate(&User{}) // create tables

	// create dummy user
	db.Create(&User{
		Name: "Peter",
	})

	http.HandleFunc("/", createGetUserHandler(db))
	http.ListenAndServe(":"+os.Getenv("PORT"), nil)
}

There are three main points to be aware of in the sample above:

  • We read the DB_USER and DB_PASS variables from the environment; they must be provided to the application when deploying to Cloud Run.
  • Connections are made to 127.0.0.1:5432 because we will be running the SQL proxy alongside the application in the same container.
  • The PORT variable also comes from the environment, but it is populated by Cloud Run itself.

Building the Application

The application is built with a Dockerfile and pushed to a gcr.io repository afterward. It uses golang:1.12 to build the application but copies only the built binaries into the second stage to produce a minimal image (~15 MB). Steps 5 and 6 download the SQL proxy binary so we can bake it into the minimal image.

FROM golang:1.12

WORKDIR /app
COPY . .

# build the Go binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o /build/server .

# download the cloudsql proxy binary
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /build/cloud_sql_proxy
RUN chmod +x /build/cloud_sql_proxy

# copy the wrapper script and credentials
COPY run.sh /build/run.sh
COPY credentials.json /build/credentials.json

#
# -- build minimal image --
#
FROM alpine:latest

WORKDIR /root

# add certificates
RUN apk --no-cache add ca-certificates

# copy everything from our build folder
COPY --from=0 /build .

CMD ["./run.sh"]

You can also notice two files that were not defined yet:

  • run.sh wraps the two processes (the server and cloud_sql_proxy binaries) and runs them together in the same container. It also introduces two new environment variables, CLOUDSQL_INSTANCE and CLOUDSQL_CREDENTIALS, which will be defined during the deployment phase. These are the contents of run.sh:
#!/bin/sh

# Start the proxy
./cloud_sql_proxy -instances=$CLOUDSQL_INSTANCE=tcp:5432 -credential_file=$CLOUDSQL_CREDENTIALS &

# wait for the proxy to spin up
sleep 10

# Start the server
./server
  • credentials.json provides access to the database via a service account. You can obtain one from the GCP console by navigating to IAM & Admin > Service Accounts and creating an account with a JSON key. Don’t forget to add the Cloud SQL Client role to this service account in the IAM tab.
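The fixed sleep 10 in run.sh is a crude way to wait for the proxy. A sketch of a more robust alternative, assuming nc is available in the image (busybox provides it on Alpine); the wait_for_port helper below is my own, not part of the proxy:

```shell
# wait_for_port HOST PORT TIMEOUT_SECONDS
# Polls once per second until the TCP port accepts connections;
# returns non-zero if the timeout elapses first.
wait_for_port() {
  host="$1"; port="$2"; timeout="${3:-10}"
  i=0
  while [ "$i" -lt "$timeout" ]; do
    if nc -z "$host" "$port" 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# in run.sh, instead of `sleep 10`:
# wait_for_port 127.0.0.1 5432 30 || exit 1
```

This starts the server as soon as the proxy is listening instead of always waiting ten seconds.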

Credentials.json alternative (thanks to Gabriela D'Ávila Ferrara, gabi.dev, from the comments): you can also skip credentials.json entirely and instead add the Cloud SQL Client role to your Cloud Run service account (email ending with @serverless-robot-prod.iam.gserviceaccount.com), which will give access to all your Cloud Run services.
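If you go this route, the role can be granted with a single gcloud command; the project ID and service account email below are placeholders, so substitute your own (the email should be listed in the IAM tab):

```shell
# Placeholders: substitute your own project ID and service account email.
PROJECT_ID=my-project
SA_EMAIL=service-123456789@serverless-robot-prod.iam.gserviceaccount.com

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$SA_EMAIL" \
  --role=roles/cloudsql.client
```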

Deploying

Now that we are done with the configuration, the last thing we need is to deploy the application. We can build the application using:

docker build -t gcr.io/$PROJECT_ID/my-server .

Then push it using Docker, so Cloud Run will be able to find it in our image registry:

docker push gcr.io/$PROJECT_ID/my-server

And the last step: creating a Cloud Run deployment. Don’t forget to populate the environment variables used by both the server and the proxy configuration (DB_USER, DB_PASS, CLOUDSQL_INSTANCE, and CLOUDSQL_CREDENTIALS).
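This can be done from the Cloud Run UI in the GCP console, or with a gcloud command along these lines (the service name, region, and variable values are illustrative, and the beta flag names may have changed since writing):

```shell
gcloud beta run deploy my-server \
  --image gcr.io/$PROJECT_ID/my-server \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars "DB_USER=postgres,DB_PASS=secret,CLOUDSQL_INSTANCE=$PROJECT_ID:us-central1:my-instance,CLOUDSQL_CREDENTIALS=./credentials.json"
```

CLOUDSQL_CREDENTIALS points at ./credentials.json because the Dockerfile copies the key into the image’s working directory, where run.sh executes.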

You should see a public link to the application shortly after deploying, right in the Cloud Run service detail UI.

Conclusion

That’s it! We just deployed a Cloud Run application that connects to Cloud SQL. Even though it’s not as easy as ticking a checkbox, I believe it does the job if you want to experiment with Cloud Run services backed by Cloud SQL before the official support comes out.

If you have any questions, don’t hesitate to ask in comments :)
