
What is gRPC and Should You Use It Instead of REST?

When considering API architectures, the debate often comes down to gRPC vs REST. REST has been the gold standard for APIs for over a decade—but it’s not the only game in town. In recent years, gRPC has emerged as a high-performance alternative, especially in microservices and real-time applications.

In this comprehensive guide, we’ll break down what gRPC is, how it works with complete code examples, and help you make an informed decision about when to use each approach.

What is gRPC?

gRPC (gRPC Remote Procedure Calls) is a modern open-source framework developed by Google that enables fast and efficient communication between services. Originally released in 2015, it has become the backbone of many large-scale distributed systems.

Unlike REST (which uses HTTP/1.1 + JSON), gRPC uses:

  • HTTP/2 for transport – enabling multiplexing, header compression, and long-lived bidirectional streams over a single connection
  • Protocol Buffers (Protobuf) for data serialization – a binary format that’s smaller and faster than JSON
  • Contract-first design via .proto files – defining services and messages before implementation

It’s designed to be fast, type-safe, and truly cross-platform with support for 11+ languages.

REST vs. gRPC: Detailed Comparison

Feature          REST (JSON)                  gRPC (Protobuf)
Protocol         HTTP/1.1                     HTTP/2
Data Format      JSON (text)                  Protobuf (binary)
Contract         Implicit (OpenAPI optional)  Explicit (.proto required)
Payload Size     Larger (verbose)             Smaller (up to 10x)
Speed            Slower parsing               Faster serialization
Streaming        Via WebSockets               Built-in (4 types)
Browser Support  Native                       Requires gRPC-Web
Debugging        Easy (human-readable)        Harder (binary)
Learning Curve   Low                          Medium-High
Code Generation  Optional                     Required (built-in)

How gRPC Works: Complete Example

Step 1: Define Your Service in Protobuf

// proto/product_service.proto
syntax = "proto3";

package ecommerce;

import "google/protobuf/timestamp.proto";

option go_package = "github.com/yourorg/ecommerce/pb";

// The product service definition
service ProductService {
  // Unary RPC - single request, single response
  rpc GetProduct(GetProductRequest) returns (Product);
  
  // Server streaming - single request, stream of responses
  rpc ListProducts(ListProductsRequest) returns (stream Product);
  
  // Client streaming - stream of requests, single response
  rpc UploadProducts(stream Product) returns (UploadProductsResponse);
  
  // Bidirectional streaming - stream both ways
  rpc ProductUpdates(stream ProductUpdateRequest) returns (stream Product);
}

message GetProductRequest {
  string product_id = 1;
}

message ListProductsRequest {
  string category = 1;
  int32 page_size = 2;
  string page_token = 3;
}

message Product {
  string id = 1;
  string name = 2;
  string description = 3;
  double price = 4;
  string category = 5;
  int32 stock_quantity = 6;
  repeated string image_urls = 7;
  ProductStatus status = 8;
  google.protobuf.Timestamp created_at = 9;
}

enum ProductStatus {
  PRODUCT_STATUS_UNSPECIFIED = 0;
  PRODUCT_STATUS_ACTIVE = 1;
  PRODUCT_STATUS_OUT_OF_STOCK = 2;
  PRODUCT_STATUS_DISCONTINUED = 3;
}

message UploadProductsResponse {
  int32 products_created = 1;
  repeated string product_ids = 2;
}

message ProductUpdateRequest {
  string product_id = 1;
  oneof update {
    double new_price = 2;
    int32 stock_adjustment = 3;
    ProductStatus new_status = 4;
  }
}

Step 2: Generate Code

# Install protoc compiler and plugins
# For Go:
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

# Generate Go code
protoc --go_out=. --go_opt=paths=source_relative \
       --go-grpc_out=. --go-grpc_opt=paths=source_relative \
       proto/product_service.proto

# For Node.js:
npm install @grpc/grpc-js @grpc/proto-loader

# For Python:
pip install grpcio grpcio-tools
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. proto/product_service.proto

Step 3: Implement the gRPC Server (Go)

// server/main.go
package main

import (
	"context"
	"io"
	"log"
	"net"
	"sync"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	pb "github.com/yourorg/ecommerce/pb"
)

type productServer struct {
	pb.UnimplementedProductServiceServer
	mu       sync.RWMutex
	products map[string]*pb.Product
}

func NewProductServer() *productServer {
	return &productServer{
		products: make(map[string]*pb.Product),
	}
}

// Unary RPC: Get a single product
func (s *productServer) GetProduct(
	ctx context.Context,
	req *pb.GetProductRequest,
) (*pb.Product, error) {
	if req.ProductId == "" {
		return nil, status.Error(codes.InvalidArgument, "product_id is required")
	}

	s.mu.RLock()
	defer s.mu.RUnlock()

	product, exists := s.products[req.ProductId]
	if !exists {
		return nil, status.Errorf(
			codes.NotFound,
			"product %s not found",
			req.ProductId,
		)
	}

	return product, nil
}

// Server streaming: List products with streaming response
func (s *productServer) ListProducts(
	req *pb.ListProductsRequest,
	stream pb.ProductService_ListProductsServer,
) error {
	s.mu.RLock()
	defer s.mu.RUnlock()

	for _, product := range s.products {
		// Filter by category if specified
		if req.Category != "" && product.Category != req.Category {
			continue
		}

		// Send each product through the stream
		if err := stream.Send(product); err != nil {
			return status.Errorf(codes.Internal, "failed to send product: %v", err)
		}

		// Simulate some processing time
		time.Sleep(100 * time.Millisecond)
	}

	return nil
}

// Client streaming: Upload multiple products
func (s *productServer) UploadProducts(
	stream pb.ProductService_UploadProductsServer,
) error {
	var productIds []string
	count := 0

	for {
		product, err := stream.Recv()
		if err == io.EOF {
			// Client finished sending
			return stream.SendAndClose(&pb.UploadProductsResponse{
				ProductsCreated: int32(count),
				ProductIds:      productIds,
			})
		}
		if err != nil {
			return status.Errorf(codes.Internal, "failed to receive: %v", err)
		}

		// Store the product
		s.mu.Lock()
		s.products[product.Id] = product
		s.mu.Unlock()

		productIds = append(productIds, product.Id)
		count++
	}
}

// Bidirectional streaming: Real-time product updates
func (s *productServer) ProductUpdates(
	stream pb.ProductService_ProductUpdatesServer,
) error {
	for {
		req, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}

		s.mu.Lock()
		product, exists := s.products[req.ProductId]
		if !exists {
			s.mu.Unlock()
			continue
		}

		// Apply the update
		switch update := req.Update.(type) {
		case *pb.ProductUpdateRequest_NewPrice:
			product.Price = update.NewPrice
		case *pb.ProductUpdateRequest_StockAdjustment:
			product.StockQuantity += update.StockAdjustment
		case *pb.ProductUpdateRequest_NewStatus:
			product.Status = update.NewStatus
		}
		s.mu.Unlock()

		// Send updated product back
		if err := stream.Send(product); err != nil {
			return err
		}
	}
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	// Create gRPC server with interceptors
	server := grpc.NewServer(
		grpc.UnaryInterceptor(loggingUnaryInterceptor),
		grpc.StreamInterceptor(loggingStreamInterceptor),
	)

	pb.RegisterProductServiceServer(server, NewProductServer())

	log.Println("gRPC server listening on :50051")
	if err := server.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}

// Interceptors for logging (middleware pattern)
func loggingUnaryInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	start := time.Now()
	resp, err := handler(ctx, req)
	log.Printf(
		"method=%s duration=%v error=%v",
		info.FullMethod,
		time.Since(start),
		err,
	)
	return resp, err
}

func loggingStreamInterceptor(
	srv interface{},
	ss grpc.ServerStream,
	info *grpc.StreamServerInfo,
	handler grpc.StreamHandler,
) error {
	start := time.Now()
	err := handler(srv, ss)
	log.Printf(
		"stream method=%s duration=%v error=%v",
		info.FullMethod,
		time.Since(start),
		err,
	)
	return err
}

Step 4: Implement the gRPC Client (Node.js)

// client/index.js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const path = require('path');

// Load protobuf definition
const PROTO_PATH = path.join(__dirname, '../proto/product_service.proto');
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});

const ecommerce = grpc.loadPackageDefinition(packageDefinition).ecommerce;

// Create client
const client = new ecommerce.ProductService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

// Unary call example
async function getProduct(productId) {
  return new Promise((resolve, reject) => {
    client.GetProduct({ product_id: productId }, (error, product) => {
      if (error) {
        reject(error);
      } else {
        resolve(product);
      }
    });
  });
}

// Server streaming example
async function listProducts(category) {
  const products = [];
  
  return new Promise((resolve, reject) => {
    const call = client.ListProducts({ category, page_size: 10 });
    
    call.on('data', (product) => {
      console.log('Received product:', product.name);
      products.push(product);
    });
    
    call.on('end', () => {
      resolve(products);
    });
    
    call.on('error', (error) => {
      reject(error);
    });
  });
}

// Client streaming example
async function uploadProducts(products) {
  return new Promise((resolve, reject) => {
    const call = client.UploadProducts((error, response) => {
      if (error) {
        reject(error);
      } else {
        resolve(response);
      }
    });
    
    // Send each product
    for (const product of products) {
      call.write(product);
    }
    
    // Signal we're done sending
    call.end();
  });
}

// Bidirectional streaming example
async function productUpdates(updates) {
  const results = [];
  
  return new Promise((resolve, reject) => {
    const call = client.ProductUpdates();
    
    call.on('data', (product) => {
      console.log('Updated product:', product);
      results.push(product);
    });
    
    call.on('end', () => {
      resolve(results);
    });
    
    call.on('error', reject);
    
    // Send updates
    for (const update of updates) {
      call.write(update);
    }
    
    call.end();
  });
}

// Usage
(async () => {
  try {
    // Get a single product
    const product = await getProduct('prod-123');
    console.log('Product:', product);
    
    // Stream all electronics
    const electronics = await listProducts('electronics');
    console.log(`Found ${electronics.length} electronics`);
    
    // Upload multiple products
    const uploadResult = await uploadProducts([
      { id: 'prod-1', name: 'Widget', price: 9.99, category: 'gadgets' },
      { id: 'prod-2', name: 'Gizmo', price: 19.99, category: 'gadgets' },
    ]);
    console.log('Upload result:', uploadResult);
    
  } catch (error) {
    console.error('Error:', error.message);
  }
})();

Equivalent REST Implementation for Comparison

// Express REST API for comparison
const express = require('express');
const app = express();
app.use(express.json());

const products = new Map();

// GET /products/:id - Unary equivalent
app.get('/products/:id', (req, res) => {
  const product = products.get(req.params.id);
  if (!product) {
    return res.status(404).json({ error: 'Product not found' });
  }
  res.json(product);
});

// GET /products - List with pagination (no streaming)
app.get('/products', (req, res) => {
  const { category } = req.query;
  // Query params arrive as strings, so parse them before doing arithmetic
  const page = parseInt(req.query.page, 10) || 1;
  const limit = parseInt(req.query.limit, 10) || 10;
  let result = Array.from(products.values());
  
  if (category) {
    result = result.filter(p => p.category === category);
  }
  
  // Pagination
  const start = (page - 1) * limit;
  const paginated = result.slice(start, start + limit);
  
  res.json({
    products: paginated,
    total: result.length,
    page,
    pages: Math.ceil(result.length / limit),
  });
});

// POST /products/batch - Batch create (no streaming)
app.post('/products/batch', (req, res) => {
  const { products: newProducts } = req.body;
  const ids = [];
  
  for (const product of newProducts) {
    products.set(product.id, product);
    ids.push(product.id);
  }
  
  res.status(201).json({
    products_created: ids.length,
    product_ids: ids,
  });
});

// No easy equivalent for bidirectional streaming in REST
// Would need WebSockets for real-time updates

app.listen(3000, () => console.log('REST API on :3000'));

Understanding gRPC Streaming Types

Type              Request  Response  Use Case
Unary             Single   Single    Standard request/response (like REST)
Server Streaming  Single   Stream    Real-time feeds, large result sets
Client Streaming  Stream   Single    File uploads, batch operations
Bidirectional     Stream   Stream    Chat, gaming, live collaboration

gRPC in the Browser with gRPC-Web

// Using gRPC-Web with Envoy proxy for browser support
// Client-side TypeScript with grpc-web

import { ProductServiceClient } from './generated/product_service_grpc_web_pb';
import { GetProductRequest, ListProductsRequest, Product } from './generated/product_service_pb';

const client = new ProductServiceClient(
  'http://localhost:8080', // Envoy proxy
  null,
  null
);

async function getProduct(productId: string): Promise<Product.AsObject> {
  const request = new GetProductRequest();
  request.setProductId(productId);
  
  return new Promise((resolve, reject) => {
    client.getProduct(request, {}, (err, response) => {
      if (err) {
        reject(err);
      } else {
        resolve(response.toObject());
      }
    });
  });
}

// Server streaming works with gRPC-Web
function listProducts(category: string): void {
  const request = new ListProductsRequest();
  request.setCategory(category);
  
  const stream = client.listProducts(request, {});
  
  stream.on('data', (product) => {
    console.log('Product:', product.toObject());
  });
  
  stream.on('end', () => {
    console.log('Stream ended');
  });
  
  stream.on('error', (err) => {
    console.error('Stream error:', err);
  });
}

# Envoy proxy configuration for gRPC-Web
# envoy.yaml
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: grpc_service
                            timeout: 0s
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        expose_headers: grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router

  clusters:
    - name: grpc_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      load_assignment:
        cluster_name: grpc_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: host.docker.internal
                      port_value: 50051

Performance Comparison

// Benchmark results (typical scenarios)
// Testing with 1000 requests, 10 concurrent connections

/*
┌────────────────────┬───────────────┬──────────────────┐
│ Operation          │ REST (JSON)   │ gRPC (Protobuf)  │
├────────────────────┼───────────────┼──────────────────┤
│ Payload size       │ 1.2 KB        │ 0.15 KB          │
│ Serialization      │ 1.2 ms        │ 0.3 ms           │
│ Deserialization    │ 0.8 ms        │ 0.2 ms           │
│ Latency (p50)      │ 12 ms         │ 4 ms             │
│ Latency (p99)      │ 45 ms         │ 15 ms            │
│ Throughput         │ 2,500 req/s   │ 8,000 req/s      │
│ Connection reuse   │ Keep-Alive    │ Multiplexed      │
└────────────────────┴───────────────┴──────────────────┘

Key factors for gRPC's performance advantage:
1. Binary serialization (Protobuf) vs text (JSON)
2. HTTP/2 multiplexing vs HTTP/1.1 connection overhead
3. Header compression (HPACK)
4. Efficient streaming without reconnection
*/
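
You can sanity-check the payload-size claim yourself. Below is a minimal Go sketch that serializes the same Product message with Protobuf and with JSON and prints both sizes; it assumes the generated package from Step 2 is importable at github.com/yourorg/ecommerce/pb, and the exact byte counts will vary with your data.

// size_compare.go - rough, illustrative payload-size check
package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/proto"

	pb "github.com/yourorg/ecommerce/pb"
)

func main() {
	p := &pb.Product{
		Id:            "prod-123",
		Name:          "Widget",
		Description:   "A very useful widget",
		Price:         9.99,
		Category:      "gadgets",
		StockQuantity: 42,
		ImageUrls:     []string{"https://example.com/widget.png"},
		Status:        pb.ProductStatus_PRODUCT_STATUS_ACTIVE,
	}

	// Binary Protobuf encoding: field numbers + compact wire types, no field names
	binary, err := proto.Marshal(p)
	if err != nil {
		log.Fatal(err)
	}

	// JSON encoding of the same message: every field name repeated as text
	jsonBytes, err := protojson.Marshal(p)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("protobuf: %d bytes, JSON: %d bytes\n", len(binary), len(jsonBytes))
}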

When to Use gRPC vs REST

Use gRPC When:

  • Internal microservices – Services communicating frequently within your infrastructure
  • Real-time applications – Video streaming, chat, gaming, live dashboards
  • High-performance requirements – When every millisecond matters
  • Polyglot environments – Teams using different languages that need type-safe contracts
  • Mobile apps – Battery and bandwidth efficiency from smaller payloads
  • IoT devices – Constrained environments where payload size matters

Stick with REST When:

  • Public APIs – External developers expect REST’s simplicity
  • Browser-first applications – Avoid gRPC-Web complexity if not needed
  • Simple CRUD operations – REST’s resource model maps naturally
  • Quick prototypes – Lower setup time, easier debugging
  • Caching requirements – HTTP caching works out of the box with REST
  • Limited infrastructure – No need for Envoy proxies or special tooling

Use Case           gRPC   REST   Recommendation
Mobile ↔ Backend   Yes    Yes    gRPC for performance, REST for simplicity
Browser ↔ Backend  Maybe  Yes    REST unless you need streaming
Service ↔ Service  Yes    Maybe  gRPC for internal, REST for third-party
Public API         No     Yes    REST for accessibility
Real-time data     Yes    No     gRPC streaming
File uploads       Yes    Yes    Both work, gRPC has client streaming

The Hybrid Approach: Best of Both Worlds

// Many organizations use both:
// - gRPC for internal service-to-service communication
// - REST (or GraphQL) for external APIs

/*
┌──────────────────────────────────────────────────────────┐
│                    External Clients                      │
│         (Browsers, Third-party apps, Mobile)            │
└────────────────────────┬─────────────────────────────────┘
                         │ REST/GraphQL
                         ▼
┌──────────────────────────────────────────────────────────┐
│                    API Gateway                           │
│             (Kong, AWS API Gateway)                      │
└────────────────────────┬─────────────────────────────────┘
                         │ gRPC
         ┌───────────────┼───────────────┐
         ▼               ▼               ▼
┌─────────────┐  ┌─────────────┐  ┌─────────────┐
│  Service A  │──│  Service B  │──│  Service C  │
│   (gRPC)    │  │   (gRPC)    │  │   (gRPC)    │
└─────────────┘  └─────────────┘  └─────────────┘
*/
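
One practical way to wire up this hybrid model is grpc-gateway (listed under Essential Tools below), which serves a REST/JSON edge generated from the same .proto contract. The following is a hypothetical sketch of the gateway process in Go; it assumes you have annotated the service with google.api.http rules and run protoc-gen-grpc-gateway, so that the generated RegisterProductServiceHandlerFromEndpoint function exists in the pb package.

// gateway/main.go - hypothetical REST edge in front of the gRPC ProductService
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "github.com/yourorg/ecommerce/pb"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// The gateway translates incoming JSON/HTTP requests into gRPC calls
	// against the backend service and converts the responses back to JSON.
	mux := runtime.NewServeMux()
	opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}

	if err := pb.RegisterProductServiceHandlerFromEndpoint(ctx, mux, "localhost:50051", opts); err != nil {
		log.Fatalf("failed to register gateway: %v", err)
	}

	log.Println("REST gateway listening on :8081")
	if err := http.ListenAndServe(":8081", mux); err != nil {
		log.Fatalf("gateway error: %v", err)
	}
}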

Common Mistakes to Avoid

  1. Using gRPC for browser-first apps without planning – gRPC-Web adds complexity; ensure it’s worth it
  2. Not versioning your proto files – Use semantic versioning and maintain backward compatibility
  3. Ignoring error handling – Use proper gRPC status codes, not just generic errors
  4. Skipping deadlines/timeouts – Always set context deadlines to prevent resource exhaustion
  5. Large messages – gRPC enforces a 4 MB message size limit by default; use streaming for large payloads (points 3-5 are illustrated in the Go client sketch after this list)
  6. Not using interceptors – They’re essential for logging, auth, metrics
  7. Assuming REST knowledge transfers – gRPC has different patterns; invest in learning
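
Here is a minimal Go client sketch illustrating points 3-5. It reuses the generated package from Step 2; the 2-second timeout and 16 MB receive limit are arbitrary example values, not recommendations.

// client/main.go - illustrative client with deadline, status handling, and message limits
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"

	pb "github.com/yourorg/ecommerce/pb"
)

func main() {
	// Raise the default 4 MB receive limit only if you truly need larger responses;
	// for big payloads, prefer the streaming RPCs instead.
	conn, err := grpc.Dial(
		"localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(16*1024*1024)),
	)
	if err != nil {
		log.Fatalf("failed to dial: %v", err)
	}
	defer conn.Close()

	client := pb.NewProductServiceClient(conn)

	// Always attach a deadline so a slow server cannot hold the call forever.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	product, err := client.GetProduct(ctx, &pb.GetProductRequest{ProductId: "prod-123"})
	if err != nil {
		// Branch on the structured status code instead of matching error strings.
		switch status.Code(err) {
		case codes.NotFound:
			log.Println("product does not exist")
		case codes.DeadlineExceeded:
			log.Println("call timed out")
		default:
			log.Printf("unexpected error: %v", err)
		}
		return
	}

	log.Printf("got product %s ($%.2f)", product.Name, product.Price)
}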

Essential Tools

  • grpcurl – CLI tool for testing gRPC services (like curl for REST); see the example after this list
  • Postman – Now supports gRPC testing
  • BloomRPC – GUI client for testing gRPC services
  • Buf – Modern tooling for managing .proto files
  • Envoy – Proxy for gRPC-Web and load balancing
  • grpc-gateway – Generate REST API from gRPC service definitions
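
For example, grpcurl can exercise the ProductService from Step 1 straight from the shell. Because the sample server does not register the reflection service, the .proto file is supplied explicitly; the paths below assume you run the commands from the project root.

# Describe the service, then make a unary call
grpcurl -plaintext -import-path proto -proto product_service.proto \
        localhost:50051 describe ecommerce.ProductService

grpcurl -plaintext -import-path proto -proto product_service.proto \
        -d '{"product_id": "prod-123"}' \
        localhost:50051 ecommerce.ProductService/GetProduct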

Conclusion

gRPC isn’t here to replace REST—but it offers serious advantages in the right context. If you need high-speed, strongly typed, contract-first APIs with streaming support, gRPC is worth investing in.

For frontend-heavy, public-facing APIs, REST still reigns for its simplicity and universal support. In 2025, a hybrid approach using gRPC for internal systems and REST for external consumers is often the smartest strategy.

Key takeaways:

  • gRPC excels in microservices, real-time apps, and performance-critical systems
  • REST remains ideal for public APIs and browser-first applications
  • Protobuf contracts provide strong typing and code generation
  • HTTP/2 features like multiplexing give gRPC a performance edge
  • Both can coexist – use each where it makes sense

For building REST APIs, see our guide on Building REST APIs with FastAPI. For a comparison with GraphQL, check out REST vs GraphQL vs gRPC. For official documentation, visit gRPC.io.
