
Go Walkthrough: encoding/json

Ben Johnson

For better or for worse, JSON is the encoding of the Internet. Its formal definition is small enough to write on the back of a napkin, and yet it can encode strings, numbers, booleans, nulls, maps, and arrays. Because of this simplicity, every language has a JSON parser.

Go’s implementation is a package called encoding/json and it allows us to seamlessly add JSON encoding to our Go objects. However, with its extensive use of reflection, encoding/json is one of the least understood packages even though it’s also one of the most used. We’ll take a deep look at how this package works — not just how to use it, but also how its internals function.

This post is part of a series of walkthroughs to help you understand the standard library better. While generated documentation provides a wealth of information, it can be difficult to understand packages in a real-world context. This series aims to provide context on how standard library packages are used in everyday applications. If you have questions or comments you can reach me at @benbjohnson on Twitter.

What is JSON?

JSON stands for JavaScript Object Notation and it is exactly that — the subset of the JavaScript language that defines object literals. JavaScript lacks static, declared typing so language literals needed to have implicit types. Strings are wrapped in double quotes, arrays are wrapped in brackets, and maps are wrapped in curly braces:

{"name": "mary", "friends: ["stu", "becky"], age: 30}

While this loose type information is cursed by many developers writing JavaScript, it provides an extremely easy and concise way of representing data.

Tradeoffs of using JSON

Although JSON is easy to start using, you can run into some issues. Formats which are easy for humans to read are typically slow for computers to parse. For example, running the encoding/json benchmarks on my MacBook Pro shows encoding and decoding speeds of 100 MB/sec and 27 MB/sec, respectively:

$ go test -bench=. encoding/json
BenchmarkCodeEncoder-4     106.26 MB/s
BenchmarkCodeDecoder-4     27.76 MB/s

A binary decoder, however, can typically parse data at many times that speed. This problem occurs because of how data is read. A JSON number literal such as “123.45” has to be decoded in two iterative steps:

  1. Read each byte to inspect whether it is a digit or a decimal point. If a non-numeric byte is read then we are done scanning the number literal.
  2. Convert the base-10 number literal into a base-2 format such as int64 or IEEE-754 floating point number representation.

This involves a lot of parsing for every incoming byte as well as a lookahead buffer on the decoder. In contrast, a binary decoder simply needs to know how many bytes to read (e.g. 2, 4, or 8) and possibly flip the endianness. These binary parsing operations also involve no branching, which would otherwise slow down CPU pipelining.
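To make the gap concrete, here is a minimal sketch (the values are illustrative) contrasting the two approaches: parsing a decimal literal with strconv versus reading a fixed-width IEEE-754 value with encoding/binary.

package main

import (
	"encoding/binary"
	"fmt"
	"math"
	"strconv"
)

func main() {
	// Text decoding: every byte of the literal must be scanned
	// before the base-10 value can be converted to a float64.
	f, err := strconv.ParseFloat("123.45", 64)
	if err != nil {
		panic(err)
	}
	fmt.Println(f) // 123.45

	// Binary decoding: read exactly 8 bytes and reinterpret them
	// as an IEEE-754 float64. No scanning or branching required.
	buf := make([]byte, 8)
	binary.LittleEndian.PutUint64(buf, math.Float64bits(123.45))
	fmt.Println(math.Float64frombits(binary.LittleEndian.Uint64(buf))) // 123.45
}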

When should you use JSON?

Typically JSON is used when ease-of-use is the primary goal of data interchange and performance is a low priority. Because JSON is human-readable, it is easy to debug if something breaks. Binary protocols, on the other hand, have to be decoded before they can be analyzed.

In many applications, encoding/decoding performance is a low priority because it can easily be scaled horizontally. For example, adding additional servers to serve API endpoints is usually trivial because encoding requires no coordination with other servers. Your database, however, likely doesn’t scale as easily once you need to add servers.

Encoding streams

The json package has two ways to encode values to JSON. First is the stream-based json.Encoder which will encode the value to an io.Writer:

type Encoder struct {}

func NewEncoder(w io.Writer) *Encoder

func (enc *Encoder) Encode(v interface{}) error

The second option is json.Marshal() which will return an in-memory byte slice of your encoded value:

func Marshal(v interface{}) ([]byte, error)
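As a quick sketch, here is how both options look in use (the Point type is just an illustration):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type Point struct {
	X, Y int
}

func main() {
	p := Point{X: 1, Y: 2}

	// Stream-based: Encode writes the JSON value, followed by a
	// newline, directly to the io.Writer.
	if err := json.NewEncoder(os.Stdout).Encode(p); err != nil {
		log.Fatal(err)
	}

	// In-memory: Marshal returns the encoded bytes.
	buf, err := json.Marshal(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(buf)) // {"X":1,"Y":2}
}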

When you pass a value into these encoders, the underlying JSON library goes through a complex process of inspecting type definitions, compiling encoders, and recursively processing values in your data. Let’s look at each of these in detail.

Type inspection

When you pass a value into the encoder, the first step is to look up the value’s type encoder. Types are inspected using Go’s reflect package and the json package holds an internal mapping of these reflect.Type values. For built-in types such as int, string, map, struct, and slice, there are hardcoded implementations in the json package. These are fairly simple — the stringEncoder wraps string values in double quotes and escapes characters as needed, the intEncoder converts integers to a string format, etc.

Note: the use of the reflect library in Go is a touchy subject. On one hand it makes generic runtime encoders such as encoding/json possible and on the other hand it can be abused by developers who use it instead of using static type checking constructs. I find the use of reflect at the application level to generally be a poor choice.

Encoder compilation

For types that are not built-in, an encoder is built on the fly and then cached for reuse. First, the encoder will check if the type implements json.Marshaler:

type Marshaler interface {
	MarshalJSON() ([]byte, error)
}

If it does then the marshaling is deferred to the type. This is really useful if one of your types has a special JSON representation and shouldn’t be handled by the json package’s reflection-based encoder.
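As a sketch, a hypothetical Celsius type could emit its own string form instead of a bare number:

// Celsius has a special JSON representation: a string such as "20.5°C".
type Celsius float64

// MarshalJSON implements json.Marshaler.
func (c Celsius) MarshalJSON() ([]byte, error) {
	// json.Marshal on a string handles the quoting and escaping for us.
	return json.Marshal(fmt.Sprintf("%.1f°C", float64(c)))
}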

Next, the encoder checks if a type implements encoding.TextMarshaler:

type TextMarshaler interface {
	MarshalText() (text []byte, err error)
}

If it does then the encoder will call that method and encode the resulting text as a JSON string. You see this all the time when using time.Time. Because time.Time has a MarshalText() method, the JSON encoder will automatically encode a time.Time value as an RFC 3339 formatted string.
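For example:

buf, _ := json.Marshal(time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
fmt.Println(string(buf)) // "2000-01-01T00:00:00Z"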

Finally, if neither interface is implemented then it will recursively build an encoder based on the primitive encoders. For example, a type which consists of a struct which contains an int field and a string field will generate a structEncoder which has an intEncoder and stringEncoder inside. Again, building this encoder only happens once and the resulting encoder will be cached for future use.

Per-field options

One important note about the struct encoder is that it reads field tags to determine per-field options for encoding. Tags are the backticked strings you sometimes see at the end of struct fields.

For example:

type User struct {
	Name    string `json:"name"`
	Age     int    `json:"age,omitempty"`
	Zipcode int    `json:"zipcode,string"`
}

These options include:

  • Renaming the field’s key. A lot of JSON keys are camel cased so it can be important to change the name to match.
  • The omitempty flag can be set, which omits any non-struct field holding an empty value (false, 0, a nil pointer or interface, or an empty array, slice, map, or string).
  • The string flag can be used to force a field to encode as a quoted string. For example, it forces an int field to be written as "123" rather than 123. All three options are demonstrated in the sketch below.
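Using the User type above, a quick sketch of the resulting output:

u := User{Name: "mary", Zipcode: 12345} // Age is left at its zero value

buf, err := json.Marshal(u)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(buf)) // {"name":"mary","zipcode":"12345"}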

Recursive processing

Finally, when encoding is performed it is written to an internal buffer called encodeState. This object is handed off to each encoder your value needs so that the encoder can append its bytes. For the json.Marshal() call, a reference to this buffer’s bytes is returned.

When using a json.Encoder, a sync.Pool is used internally to reuse these encodeState buffers. That helps minimize the number of heap allocations required by the encoder, so for stream processing you should always use a json.Encoder.

Decoding streams

Converting JSON-encoded bytes back into objects is sort of like reversing the process of encoding but with some important differences.

There are two ways to decode JSON from bytes. First is the stream-based json.Decoder which allows you to decode from an io.Reader:

type Decoder struct {}

func NewDecoder(r io.Reader) *Decoder

func (dec *Decoder) Decode(v interface{}) error

Alternatively you can decode from a byte slice by using the json.Unmarshal() function:

func Unmarshal(data []byte, v interface{}) error
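As a quick sketch, a json.Decoder can pull a sequence of JSON values off a stream one at a time (the input here is illustrative):

dec := json.NewDecoder(strings.NewReader(`{"name":"mary"} {"name":"stu"}`))
for {
	var v struct{ Name string }
	if err := dec.Decode(&v); err == io.EOF {
		break
	} else if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.Name) // mary, then stu
}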

These decoders work in two parts: first a scanner tokenizes the input bytes and then a decodeState converts the tokens to Go objects.

Scanning JSON

The scanner is an internal state machine used for parsing JSON. It operates in several steps. First, it checks the first byte of a value to determine the type of token to parse. If it is a “{” then it needs to parse an object; if it’s a “[” then it needs to parse an array. This works for simple values too. A double quote indicates the start of a string, a “t” or an “f” indicates the start of a boolean value, and the digits 0–9 indicate the beginning of a number.

Once it determines the type of scanning to be done, it hands off to a type-specific function — e.g. string scan, number scan, etc. For complex objects such as maps and arrays, a stack is used to keep track of closing braces.

Lookahead buffers

An interesting detail of scanning is the lookahead buffer. JSON is “LL(1) parseable,” which means it requires only a single byte of buffer while scanning. This buffer is used to peek ahead at the next byte.

For example, the number scanning function will continue to read bytes until it finds a non-numeric character. However, since the character is already read from the stream we need to push it back on a buffer for the next scanning function to use. This is what the lookahead buffer is for.

If you’re interested in learning more about writing parsers, I wrote a Gopher Academy post called Handwriting Parsers & Lexers in Go.

Decoding tokens

Once tokens are scanned they need to be interpreted. This is the job of the decodeState. In this phase, each token is matched against the value you passed in to be decoded into.

For example, if you pass in a struct type then the decoder will expect to see a “{” token; any other token will cause decoding to return an error. This phase of matching tokens to values involves heavy use of the reflect package. Unlike encoders, however, these decoders are not cached, so the reflection work has to be redone on every decode.

You can also process tokens as a stream using the Decoder.Token() and Decoder.More() methods. Admittedly, I haven’t used these methods personally but it’s good to know they are available.
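A quick sketch, assuming a simple array input:

dec := json.NewDecoder(strings.NewReader(`["a", "b", "c"]`))

// Read the opening "[" delimiter.
if _, err := dec.Token(); err != nil {
	log.Fatal(err)
}

// Read each element until the closing "]" is next.
for dec.More() {
	tok, err := dec.Token()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tok) // a, then b, then c
}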

Custom unmarshaling

Just like with encoding, you can specify custom implementations during decoding. The decoder will first check if a type implements json.Unmarshaler:

type Unmarshaler interface {
	UnmarshalJSON([]byte) error
}

This allows your type to receive the entire raw JSON value and parse it itself, which can be useful if you want to write your own optimized implementation.
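Continuing the earlier sketch, the hypothetical Celsius type could parse its custom string form back out:

// UnmarshalJSON implements json.Unmarshaler, parsing strings like "20.5°C".
func (c *Celsius) UnmarshalJSON(data []byte) error {
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return err
	}
	f, err := strconv.ParseFloat(strings.TrimSuffix(s, "°C"), 64)
	if err != nil {
		return err
	}
	*c = Celsius(f)
	return nil
}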

Next the decoder checks if the value type implements encoding.TextUnmarshaler:

type TextUnmarshaler interface {
	UnmarshalText(text []byte) error
}

This is useful if you have a string representation of your type that you want to use. One example is an enum type that is represented internally as an integer but encoded and decoded as a string, as in the sketch below.
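A minimal sketch of such an enum (the Color type and its values are hypothetical):

// Color is stored as an integer but travels over JSON as a string.
type Color int

const (
	Red Color = iota
	Green
)

// MarshalText implements encoding.TextMarshaler.
func (c Color) MarshalText() ([]byte, error) {
	switch c {
	case Red:
		return []byte("red"), nil
	case Green:
		return []byte("green"), nil
	}
	return nil, fmt.Errorf("unknown color: %d", c)
}

// UnmarshalText implements encoding.TextUnmarshaler.
func (c *Color) UnmarshalText(text []byte) error {
	switch string(text) {
	case "red":
		*c = Red
	case "green":
		*c = Green
	default:
		return fmt.Errorf("unknown color: %q", text)
	}
	return nil
}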

Deferred processing

An alternative to json.Unmarshaler is the json.RawMessage type. With RawMessage, the raw JSON representation will be saved to a field that you can process after unmarshaling is complete. This can be used if you need to interpret a “type” field in your JSON object and then change JSON parsing based on the value.

type T struct {
	Type  string          `json:"type"`
	Value json.RawMessage `json:"value"`
}

func (t *T) Val() (interface{}, error) {
	switch t.Type {
	case "foo":
		var v Foo // Foo is your concrete type for "foo" values
		err := json.Unmarshal(t.Value, &v)
		return v, err
	case "bar":
		var v Bar // Bar is your concrete type for "bar" values
		err := json.Unmarshal(t.Value, &v)
		return v, err
	default:
		return nil, errors.New("invalid type")
	}
}

I personally find json.Unmarshaler more useful though because I don’t like to save off JSON for later interpretation.

Another way to defer processing is with JSON numbers. Because JSON doesn’t distinguish between integers and floats, the decoder will convert numbers into float64 when decoding into an interface{} field. To defer the parsing you can use the json.Number type instead.

type T struct {
	Value json.Number
}

...

if strings.Contains(string(t.Value), ".") {
	v, err := t.Value.Float64()
	// process as a float
} else {
	v, err := t.Value.Int64()
	// process as an integer
}

I don’t use json.Number too often since I usually use static types during decoding.

Pretty printing

JSON is typically written out as one long run of bytes with no extra whitespace; however, that’s hard to read. You can set indentation in two ways. If you have an in-memory JSON-encoded byte slice then you can pass it to the json.Indent() function:

func Indent(dst *bytes.Buffer, src []byte, prefix, indent string) error

The prefix argument specifies a string to write at the start of every line, and indent specifies the string used for each level of indentation. I don’t typically use the prefix but I’ll usually use a two-space or tab indent value.

There’s a helper function called json.MarshalIndent() which literally just calls json.Marshal() and then json.Indent().
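For example, a quick sketch:

buf, err := json.MarshalIndent(map[string]int{"x": 1}, "", "  ")
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(buf))
// {
//   "x": 1
// }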

If you are using the stream-based json.Encoder then you can set the indentation on it using the SetIndent() method:

func (enc *Encoder) SetIndent(prefix, indent string)

I find that many people don’t know about SetIndent() and will marshal & indent to a byte slice and then write that result to a stream.
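A quick sketch of indenting directly on the stream:

enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", "\t")
if err := enc.Encode(map[string]int{"x": 1}); err != nil {
	log.Fatal(err)
}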

The reverse of the indent functions is the Compact() function:

func Compact(dst *bytes.Buffer, src []byte) error

This will rewrite src to a destination buffer but remove all extraneous whitespace.
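For example:

var buf bytes.Buffer
if err := json.Compact(&buf, []byte("{\n  \"x\": 1\n}")); err != nil {
	log.Fatal(err)
}
fmt.Println(buf.String()) // {"x":1}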

Handling errors during encoding/decoding

The json package has quite a few error types within it. Here’s a list of what can go wrong while you’re encoding or decoding:

  • If you pass in a non-pointer value to be decoded then you are actually passing a copy of your value and the decoder can’t decode into the original value. The decoder catches this and returns an InvalidUnmarshalError.
  • If your data contains invalid JSON then a SyntaxError will be returned with the byte position of the invalid character.
  • If an error is returned by a json.Marshaler or encoding.TextMarshaler then it will be wrapped in a MarshalerError.
  • If a token cannot be unmarshaled into a corresponding value then an UnmarshalTypeError is returned.
  • The float values of Infinity and NaN are not representable in JSON and will return an UnsupportedValueError.
  • Types which cannot be represented in JSON (such as functions, channels, and complex numbers) will return an UnsupportedTypeError.
  • Before Go 1.2, invalid UTF-8 characters would cause an InvalidUTF8Error to be returned. Later versions simply replace invalid characters with U+FFFD, the Unicode replacement character.

While this might seem like a lot of errors, there’s not much you can do to handle them in your code other than log an error and have a human operator intervene. Also, many of them can be caught at development-time if you have unit test coverage.
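If you do need to distinguish them, say for better log messages, a type switch over the error works. A quick sketch:

var v map[string]interface{}
if err := json.Unmarshal([]byte(`{`), &v); err != nil {
	switch err := err.(type) {
	case *json.SyntaxError:
		log.Printf("syntax error at byte offset %d", err.Offset)
	case *json.UnmarshalTypeError:
		log.Printf("cannot unmarshal %s into %s", err.Value, err.Type)
	default:
		log.Print(err)
	}
}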

Alternative implementations

Several years ago I implemented a tool called megajson which generated type-specific encoders and decoders at compile time in order to avoid reflection entirely. This made encoding and decoding much faster. However, the tool was a proof of concept, had limited support, and was eventually abandoned.

Luckily, Paul Querna made an implementation called ffjson which does the same thing but does it much better. If you need to improve your JSON encoding and decoding performance then I highly suggest looking at his implementation.

Conclusion

JSON can be a great data format when you need to get up and running quickly or you need to provide a simple API to users. Go’s implementation provides a lot of features to make it simple to use through the use of reflection.

We’ve looked at the internals of the encoding and decoding side of JSON as well as seen how we can format our JSON representation. These tools may seem simple from the outside but there’s a lot of fascinating stuff happening internally to make them as fast and efficient as possible.

Go Walkthrough

Ben Johnson

Freelance Go developer, author of BoltDB