JavaScript Interoperability and Data Exchange

The boundary between Go and JavaScript is where WebAssembly applications succeed or fail. I’ve debugged countless mysterious issues that all traced back to misunderstanding how data moves between these environments. The syscall/js package provides the bridge, but using it well requires understanding both its power and its quirks.

The challenge is bridging two completely different worlds. Go has static typing and structured memory management, while JavaScript has dynamic typing and prototype-based objects. Success comes from creating clean interfaces that work naturally in both environments.

Understanding js.Value

Every JavaScript object, function, or primitive becomes a js.Value in Go. This isn’t a copy of the JavaScript data - it’s a handle that references objects living in JavaScript memory. When you call methods on js.Value, you’re sending messages across the WebAssembly boundary.

This fundamental concept shapes everything about interoperability design:

package main

import "syscall/js"

func main() {
    // Get references to global JavaScript objects
    document := js.Global().Get("document")
    console := js.Global().Get("console")
    
    // Call JavaScript methods
    console.Call("log", "Hello from Go!")
    
    // Set properties
    document.Set("title", "My WebAssembly App")
}

Each operation here crosses the WebAssembly boundary. Understanding this helps you design efficient interfaces.
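To make the handle semantics concrete, here is a minimal sketch (the appState name is purely illustrative): because a js.Value references the object rather than copying it, a change made through the Go handle is immediately visible to JavaScript.

package main

import "syscall/js"

func main() {
    // Create a plain {} object in JavaScript memory and keep a handle to it.
    state := js.Global().Get("Object").New()
    state.Set("count", 1)
    
    // Expose the same object globally; JavaScript now sees window.appState.
    js.Global().Set("appState", state)
    
    // Updating through the Go handle changes the object JavaScript sees,
    // because state is a reference, not a copy.
    state.Set("count", 2)
    
    // Keep the program alive so JavaScript can keep using appState.
    select {}
}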

Type Conversion Basics

Go and JavaScript have different type systems, but syscall/js handles basic conversions automatically. Strings, numbers, and booleans convert seamlessly. Complex types require more thought.

// Automatic conversions work for primitives
func SetTitle(title string) {
    js.Global().Get("document").Set("title", title)
}

// Complex data needs JSON marshaling
func SendData(data map[string]interface{}) {
    jsonBytes, _ := json.Marshal(data) // requires encoding/json; error handling omitted for brevity
    js.Global().Call("receiveData", string(jsonBytes))
}

I’ve learned to prefer JSON for complex data transfer. It’s slower than direct conversion but much more reliable and debuggable. The performance difference rarely matters compared to the debugging time saved.
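As a sketch of that preference, the function below marshals a typed struct and hands JavaScript a single JSON string; the receiveUser function and the field layout are assumptions for illustration, and the JavaScript side would parse the payload with JSON.parse.

import (
    "encoding/json"
    "fmt"
    "syscall/js"
)

type User struct {
    Name  string `json:"name"`
    Email string `json:"email"`
    Admin bool   `json:"admin"`
}

// SendUser serializes a Go struct and crosses the boundary exactly once,
// passing the whole payload as one string.
func SendUser(u User) error {
    payload, err := json.Marshal(u)
    if err != nil {
        return fmt.Errorf("marshal user: %w", err)
    }
    js.Global().Call("receiveUser", string(payload))
    return nil
}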

Exposing Go Functions

JavaScript can call Go functions, but this requires careful memory management. Every function you expose creates a JavaScript function object that must be explicitly released to prevent memory leaks.

package main

import (
    "strings"
    "syscall/js"
)

func main() {
    // Export a function to JavaScript
    processData := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        if len(args) == 0 {
            return "No data provided"
        }
        
        input := args[0].String()
        return strings.ToUpper(input)
    })
    
    // Release when the callback is no longer needed. Because main blocks on
    // select {} below, this defer never actually runs here; the pattern
    // matters most for callbacks with shorter lifetimes.
    defer processData.Release()
    
    // Make it available to JavaScript
    js.Global().Set("processData", processData)
    
    // Keep the program alive
    select {}
}

Calling Release() is crucial for any js.Func you stop using; forgetting it leaks the function object, and those leaks accumulate over time. In this example main never returns, so the defer is mostly a reminder of the habit; it becomes essential for callbacks you create and discard repeatedly.
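Return values from a js.FuncOf callback are converted the same way js.ValueOf converts arguments, so primitives, maps, and slices come out as the JavaScript values you would expect. A sketch, with illustrative names, that returns a map so the JavaScript caller receives a plain object:

func ExportSummarize() js.Func {
    summarize := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        if len(args) == 0 {
            return map[string]interface{}{"error": "no input"}
        }
        input := args[0].String()
        // The returned map is converted to a JavaScript object:
        // {length: ..., upper: ...}
        return map[string]interface{}{
            "length": len(input),
            "upper":  strings.ToUpper(input),
        }
    })
    js.Global().Set("summarize", summarize)
    return summarize // the caller is responsible for calling Release()
}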

Event Handling Patterns

Browser events are asynchronous and can fire frequently. I handle them by creating wrapper functions that manage the complexity:

func AddClickHandler(elementId string) {
    element := js.Global().Get("document").Call("getElementById", elementId)
    
    handler := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        // Handle the click event
        js.Global().Get("console").Call("log", "Button clicked!")
        return nil
    })
    
    element.Call("addEventListener", "click", handler)
    
    // In a real app, store handler reference for later cleanup
}

Event handler callbacks run on the JavaScript event loop, so a handler that needs to block (waiting on a channel, fetching data) should start a goroutine for that work rather than blocking in the callback itself, and any state the goroutine shares with the rest of the program needs proper synchronization. The handler-cleanup comment above is also worth making concrete, as in the sketch below.
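One workable sketch is a small registry that remembers each js.Func so it can be detached and released later; the names here are illustrative, not a standard API, and the map needs extra locking if handlers are registered from multiple goroutines.

// handlers keeps one registered handler per element ID for later cleanup.
var handlers = map[string]js.Func{}

func AddManagedClickHandler(elementId string, onClick func()) {
    element := js.Global().Get("document").Call("getElementById", elementId)
    handler := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        onClick()
        return nil
    })
    element.Call("addEventListener", "click", handler)
    handlers[elementId] = handler
}

func RemoveClickHandler(elementId string) {
    handler, ok := handlers[elementId]
    if !ok {
        return
    }
    element := js.Global().Get("document").Call("getElementById", elementId)
    element.Call("removeEventListener", "click", handler)
    handler.Release() // release only after the browser no longer references it
    delete(handlers, elementId)
}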

Async Operations with Promises

JavaScript’s Promise-based APIs don’t map naturally to Go’s synchronous model. I use channels to bridge this gap:

func FetchData(url string) (string, error) {
    resultChan := make(chan string, 1)
    errorChan := make(chan error, 1)
    
    var onResponse, onText, onError js.Func
    
    // fetch resolves with a Response object, so we chain response.text() to
    // get a second promise that resolves with the body as a string.
    onResponse = js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        args[0].Call("text").Call("then", onText).Call("catch", onError)
        return nil
    })
    defer onResponse.Release()
    
    onText = js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        resultChan <- args[0].String()
        return nil
    })
    defer onText.Release()
    
    onError = js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        err := fmt.Errorf("fetch failed")
        if len(args) > 0 {
            err = fmt.Errorf("fetch failed: %s", args[0].Call("toString").String())
        }
        errorChan <- err
        return nil
    })
    defer onError.Release()
    
    // Call fetch and wire the callbacks into the promise chain
    js.Global().Call("fetch", url).Call("then", onResponse).Call("catch", onError)
    
    select {
    case result := <-resultChan:
        return result, nil
    case err := <-errorChan:
        return "", err
    }
}

This pattern lets you write synchronous-looking Go code that handles JavaScript promises correctly. The one caveat is where you call it from: FetchData blocks on its channels, so invoke it from a goroutine rather than directly inside a js.FuncOf callback, otherwise the blocked callback also blocks the JavaScript event loop that fetch needs in order to resolve.
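A sketch of wiring FetchData to a click handler that way; the element ID and URL are illustrative:

func WireFetchButton() {
    button := js.Global().Get("document").Call("getElementById", "load-button")
    
    onClick := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
        // Do the blocking work in a goroutine so the event loop keeps running.
        go func() {
            body, err := FetchData("/api/data")
            if err != nil {
                js.Global().Get("console").Call("error", err.Error())
                return
            }
            js.Global().Get("console").Call("log", body)
        }()
        return nil
    })
    
    button.Call("addEventListener", "click", onClick)
    // Keep onClick alive for as long as the button exists; release it once
    // the handler is removed.
}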

Memory Management Rules

The boundary between Go and JavaScript creates unique memory management challenges. I follow these rules to avoid leaks:

  • Always call Release() on js.Func objects when done
  • Don’t store js.Value objects for long periods
  • Cache frequently-accessed global objects at startup
  • Monitor memory usage during development

The caching rule looks like this in practice:

type JSCache struct {
    document js.Value
    console  js.Value
}

func NewJSCache() *JSCache {
    return &JSCache{
        document: js.Global().Get("document"),
        console:  js.Global().Get("console"),
    }
}

Caching global objects avoids repeated lookups and provides a cleaner API for your application code.
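A couple of convenience methods make the cache pleasant to use; these method names are just one possible shape for the wrapper:

// Log writes to the cached console object without re-resolving the global.
func (c *JSCache) Log(args ...interface{}) {
    c.console.Call("log", args...)
}

// ElementByID looks an element up through the cached document reference.
func (c *JSCache) ElementByID(id string) js.Value {
    return c.document.Call("getElementById", id)
}

Application code then calls cache.Log(...) and cache.ElementByID(...) instead of touching js.Global() directly, which keeps the boundary concentrated in one place.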

Performance Optimization

Boundary crossings have measurable overhead. The most effective optimization is reducing the number of crossings by batching operations:

// Inefficient: multiple boundary crossings
func UpdateElementsSlow(ids []string, texts []string) {
    for i, id := range ids {
        // Each iteration crosses the boundary several times: the global
        // lookup, the getElementById call, and the textContent write.
        element := js.Global().Get("document").Call("getElementById", id)
        element.Set("textContent", texts[i])
    }
}

// Efficient: single crossing with batched data
func UpdateElementsFast(updates map[string]string) {
    data, _ := json.Marshal(updates) // error handling omitted for brevity
    js.Global().Call("batchUpdateElements", string(data))
}

The batched approach requires JavaScript helper functions but performs much better with large datasets. Design your APIs to minimize boundary crossings from the start.
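To put numbers behind that claim, you can time both paths with the browser's performance.now(). The sketch below assumes the batchUpdateElements helper from above exists on the page; the results depend on the browser and data size, so treat it as a way to measure, not as a benchmark result.

// CompareUpdateCosts times the per-element path against the batched path.
// performance.now() returns a timestamp in milliseconds as a float.
func CompareUpdateCosts(ids, texts []string, updates map[string]string) {
    perf := js.Global().Get("performance")
    
    start := perf.Call("now").Float()
    UpdateElementsSlow(ids, texts)
    slowMs := perf.Call("now").Float() - start
    
    start = perf.Call("now").Float()
    UpdateElementsFast(updates)
    fastMs := perf.Call("now").Float() - start
    
    fmt.Printf("per-element: %.2fms, batched: %.2fms\n", slowMs, fastMs)
}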

Error Handling Strategies

JavaScript errors don’t map cleanly to Go errors. I wrap JavaScript calls to provide consistent error handling:

func SafeJSCall(obj js.Value, method string, args ...interface{}) (result js.Value, err error) {
    defer func() {
        if r := recover(); r != nil {
            // JavaScript exceptions surface as Go panics; turn the panic
            // into an ordinary error for the caller.
            err = fmt.Errorf("JavaScript error: %v", r)
        }
    }()
    
    if obj.IsUndefined() || obj.IsNull() {
        return js.Value{}, fmt.Errorf("object is null or undefined")
    }
    
    return obj.Call(method, args...), nil
}

This pattern catches JavaScript exceptions and converts them to Go errors, making debugging much easier.
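In practice the wrapper reads like any other Go call site. Here is a sketch that reads from localStorage, an API that can legitimately be missing or blocked; the ReadSetting name is illustrative:

func ReadSetting(key string) (string, error) {
    storage := js.Global().Get("localStorage")
    value, err := SafeJSCall(storage, "getItem", key)
    if err != nil {
        return "", fmt.Errorf("reading %q: %w", key, err)
    }
    if value.IsNull() {
        return "", fmt.Errorf("no setting stored under %q", key)
    }
    return value.String(), nil
}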

Testing Interoperability

Testing JavaScript interoperability requires running tests in browser environments. I use build tags to separate browser-specific code:

//go:build js && wasm

package main

import (
    "syscall/js"
    "testing"
)

func TestJSIntegration(t *testing.T) {
    // This only runs in a WebAssembly environment
    console := js.Global().Get("console")
    if console.IsUndefined() {
        t.Fatal("Console object not available")
    }
    
    // Test basic functionality
    console.Call("log", "Test message")
}

The build tags ensure these tests only run in the correct environment, preventing failures in regular Go test runs. To execute them, build with GOOS=js GOARCH=wasm and point go test at a WebAssembly-aware runner through the -exec flag; Go ships a Node-based wrapper script, and tools such as wasmbrowsertest run the tests in a real browser.
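When the binary under test exports its own functions, you can also exercise them round-trip through the JavaScript global scope. This sketch assumes the processData export from earlier has been wired up before the test runs:

func TestProcessDataRoundTrip(t *testing.T) {
    fn := js.Global().Get("processData")
    if fn.IsUndefined() {
        t.Skip("processData is not exported in this build")
    }
    
    got := fn.Invoke("hello").String()
    if got != "HELLO" {
        t.Errorf("processData(%q) = %q, want %q", "hello", got, "HELLO")
    }
}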

Practical Integration Patterns

After building many WebAssembly applications, I’ve settled on patterns that work reliably across different projects. The key insights: keep the boundary clean, batch operations when possible, and always handle errors gracefully.

Most performance issues in WebAssembly applications occur at the Go-JavaScript boundary. Design your interfaces carefully, and you’ll avoid the pitfalls that plague many projects.

Next, we’ll use these interoperability foundations to manipulate the DOM and work with browser APIs directly from Go.