How to Add Bot Detection to Ruby on Rails with Middleware
Ruby on Rails is built on Rack middleware — a pipeline that processes every HTTP request before it reaches your controllers. That makes the middleware stack the ideal place to add bot detection: block malicious IPs, rate-limit suspicious traffic, and enrich requests with risk data before your application logic runs.
This guide walks you through integrating IPASIS bot detection into a Rails application, from a basic Rack middleware to production-ready patterns with Redis caching, controller-level checks, and Active Job background scoring.
What you'll build:
- ✅ Rack middleware that checks IP risk on every request
- ✅ Redis caching to minimize API calls and latency
- ✅ Controller concerns for route-specific protection
- ✅ before_action guards for signup, login, checkout, and API endpoints
- ✅ Active Job background scoring for non-blocking risk assessment
- ✅ Graceful degradation when the API is unreachable
- ✅ Structured logging and monitoring with Rails instrumentation
Prerequisites
- Ruby on Rails 7.0+ (Rails 7.1+ recommended)
- An IPASIS API key — get one free (1,000 lookups/day, no credit card)
- Redis (recommended for caching, but works without it)
- Basic familiarity with Rack middleware and Rails concerns
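Every example below reads the API key from `ENV['IPASIS_API_KEY']`. One way to provide it in development is a `.env` file with the dotenv-rails gem (the values here are placeholders); in production, prefer your host's secret store or Rails encrypted credentials:

```shell
# .env — loaded by dotenv-rails in development; placeholder values
IPASIS_API_KEY=your_api_key_here
REDIS_URL=redis://localhost:6379/0
```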
Step 1: Basic Rack Middleware
Create a Rack middleware that intercepts requests and checks the client IP against IPASIS. This runs before any controller logic, so high-risk traffic never even reaches your app.
app/middleware/bot_detection_middleware.rb
```ruby
require 'net/http'
require 'json'
require 'ipaddr'

class BotDetectionMiddleware
  IPASIS_URL = 'https://api.ipasis.com/v1/lookup'
  BLOCK_THRESHOLD = 0.85

  PRIVATE_RANGES = [
    IPAddr.new('10.0.0.0/8'),
    IPAddr.new('172.16.0.0/12'),
    IPAddr.new('192.168.0.0/16'),
    IPAddr.new('127.0.0.0/8'),
  ].freeze

  def initialize(app)
    @app = app
    @api_key = ENV.fetch('IPASIS_API_KEY')
  end

  def call(env)
    request = ActionDispatch::Request.new(env)
    ip = request.remote_ip

    # Skip private/local IPs
    return @app.call(env) if private_ip?(ip)

    # Check IP risk
    risk_data = lookup_ip(ip)

    if risk_data && risk_data['risk_score'].to_f >= BLOCK_THRESHOLD
      Rails.logger.warn("[BotDetection] Blocked #{ip} — score: #{risk_data['risk_score']}")
      return [
        403,
        { 'Content-Type' => 'application/json' },
        [{ error: 'Access denied', reason: 'suspicious_ip' }.to_json]
      ]
    end

    # Attach risk data to the request for downstream use
    env['ipasis.risk_data'] = risk_data
    @app.call(env)
  end

  private

  def private_ip?(ip)
    addr = IPAddr.new(ip)
    PRIVATE_RANGES.any? { |range| range.include?(addr) }
  rescue IPAddr::InvalidAddressError
    false
  end

  def lookup_ip(ip)
    uri = URI("#{IPASIS_URL}?ip=#{ip}")
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    http.open_timeout = 2
    http.read_timeout = 3

    req = Net::HTTP::Get.new(uri)
    req['Authorization'] = "Bearer #{@api_key}"

    response = http.request(req)
    return nil unless response.is_a?(Net::HTTPSuccess)

    JSON.parse(response.body)
  rescue StandardError => e
    Rails.logger.error("[BotDetection] API error: #{e.message}")
    nil # Fail open — don't block on API errors
  end
end
```

Register the middleware in your Rails application:
config/application.rb
```ruby
require_relative '../app/middleware/bot_detection_middleware'

module YourApp
  class Application < Rails::Application
    # Insert after RemoteIp so request.remote_ip resolves the real
    # client address (honoring X-Forwarded-For from trusted proxies)
    config.middleware.insert_after ActionDispatch::RemoteIp,
                                   BotDetectionMiddleware
  end
end
```

Step 2: Add Redis Caching
Every IP lookup takes 20-50ms. For traffic-heavy apps, caching results in Redis eliminates redundant API calls and keeps latency under 1ms for repeat visitors.
app/middleware/bot_detection_middleware.rb — with Redis caching
```ruby
require 'net/http'
require 'json'
require 'redis'
require 'ipaddr'

class BotDetectionMiddleware
  IPASIS_URL = 'https://api.ipasis.com/v1/lookup'
  BLOCK_THRESHOLD = 0.85
  CACHE_TTL = 300 # 5 minutes
  CACHE_PREFIX = 'ipasis:risk:'

  def initialize(app)
    @app = app
    @api_key = ENV.fetch('IPASIS_API_KEY')
    @redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0'))
  end

  def call(env)
    request = ActionDispatch::Request.new(env)
    ip = request.remote_ip
    return @app.call(env) if private_ip?(ip)

    risk_data = cached_lookup(ip)

    if risk_data && risk_data['risk_score'].to_f >= BLOCK_THRESHOLD
      Rails.logger.warn("[BotDetection] Blocked #{ip} — score: #{risk_data['risk_score']}")
      return blocked_response
    end

    env['ipasis.risk_data'] = risk_data
    @app.call(env)
  end

  private

  def cached_lookup(ip)
    cache_key = "#{CACHE_PREFIX}#{ip}"

    # Try the cache first
    cached = @redis.get(cache_key)
    if cached
      ActiveSupport::Notifications.instrument('bot_detection.cache_hit', ip: ip)
      return JSON.parse(cached)
    end

    # Cache miss — call the API
    risk_data = lookup_ip(ip)
    if risk_data
      @redis.setex(cache_key, CACHE_TTL, risk_data.to_json)
      ActiveSupport::Notifications.instrument('bot_detection.cache_miss', ip: ip)
    end
    risk_data
  rescue Redis::BaseError => e
    Rails.logger.error("[BotDetection] Redis error: #{e.message}")
    lookup_ip(ip) # Fall back to an uncached lookup
  end

  def blocked_response
    [403, { 'Content-Type' => 'application/json' },
     [{ error: 'Access denied', reason: 'suspicious_ip' }.to_json]]
  end

  # ... (private_ip? and lookup_ip methods from Step 1)
end
```

💡 Cache TTL Strategy
Five minutes is a good default. Use a shorter TTL (60s) for login and payment pages, where accuracy matters most, and a longer TTL (15-30 min) for general content pages, where risk tolerance is higher.
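One way to implement that tiering is a small path-to-TTL lookup. A sketch (the path patterns are illustrative; wire the result into the `setex` call in `cached_lookup` in place of the `CACHE_TTL` constant):

```ruby
# Per-path cache TTLs: shorter where stale risk data is costly,
# longer where it isn't. Patterns here are examples only.
TTL_BY_PATH = {
  %r{\A/(login|signup|password)} => 60,  # auth flows: fresh data
  %r{\A/(checkout|payments)}     => 60,  # payment flows: fresh data
  %r{\A/api/}                    => 300, # API traffic: default
}.freeze

def cache_ttl_for(path)
  TTL_BY_PATH.each { |pattern, ttl| return ttl if path.match?(pattern) }
  1800 # general content pages: 30 minutes
end

cache_ttl_for('/login')     # => 60
cache_ttl_for('/blog/post') # => 1800
```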
Step 3: Route-Specific Protection with Controller Concerns
Not every route needs the same level of protection: login pages need strict blocking, while blog pages need only light monitoring. Use a Rails concern to add granular control at the controller level.
app/controllers/concerns/bot_detectable.rb
```ruby
module BotDetectable
  extend ActiveSupport::Concern

  RISK_TIERS = {
    critical: 0.7,  # Payments, password reset
    high: 0.8,      # Login, signup
    standard: 0.85, # General authenticated routes
    monitor: 1.0,   # Public pages — log only, never block
  }.freeze

  included do
    # ActionController::API has no view layer, so guard helper_method
    helper_method :ip_risk_data, :ip_risk_score if respond_to?(:helper_method)
  end

  private

  def require_low_risk(tier = :standard)
    threshold = RISK_TIERS[tier] || RISK_TIERS[:standard]
    score = ip_risk_score

    if score && score >= threshold
      Rails.logger.warn(
        "[BotDetection] Controller block — IP: #{request.remote_ip}, " \
        "score: #{score}, tier: #{tier}, action: #{action_name}"
      )
      render json: { error: 'Suspicious activity detected' }, status: :forbidden
    end
  end

  def ip_risk_data
    @ip_risk_data ||= request.env['ipasis.risk_data']
  end

  def ip_risk_score
    ip_risk_data&.dig('risk_score')&.to_f
  end

  def ip_is_vpn?
    ip_risk_data&.dig('is_vpn') == true
  end

  def ip_is_datacenter?
    ip_risk_data&.dig('is_datacenter') == true
  end

  def ip_is_tor?
    ip_risk_data&.dig('is_tor') == true
  end

  def ip_country
    ip_risk_data&.dig('country_code')
  end
end
```

Now use the concern in your controllers with before_action:
app/controllers/sessions_controller.rb
```ruby
class SessionsController < ApplicationController
  include BotDetectable

  before_action -> { require_low_risk(:high) }, only: [:create]

  def create
    # Login logic — only reached if the IP passes the risk check
    user = User.find_by(email: params[:email])

    if user&.authenticate(params[:password])
      # Log the risk data alongside the login
      AuditLog.create!(
        user: user,
        action: 'login',
        ip_address: request.remote_ip,
        risk_score: ip_risk_score,
        is_vpn: ip_is_vpn?,
        country: ip_country
      )
      session[:user_id] = user.id
      redirect_to dashboard_path
    else
      flash.now[:error] = 'Invalid credentials'
      render :new, status: :unprocessable_entity
    end
  end
end
```

app/controllers/registrations_controller.rb
```ruby
class RegistrationsController < ApplicationController
  include BotDetectable

  before_action -> { require_low_risk(:high) }, only: [:create]

  def create
    @user = User.new(user_params)

    # Enrich the user record with risk data
    @user.signup_risk_score = ip_risk_score
    @user.signup_ip = request.remote_ip
    @user.signup_country = ip_country
    @user.flagged = ip_is_vpn? || ip_is_datacenter?

    if @user.save
      # High risk but below the block threshold → require email verification
      if ip_risk_score && ip_risk_score >= 0.5
        UserMailer.verification_email(@user).deliver_later
        redirect_to verify_email_path, notice: 'Please verify your email'
      else
        session[:user_id] = @user.id
        redirect_to dashboard_path
      end
    else
      render :new, status: :unprocessable_entity
    end
  end
end
```

app/controllers/payments_controller.rb
```ruby
class PaymentsController < ApplicationController
  include BotDetectable

  # Critical tier — lowest threshold for payment routes
  before_action -> { require_low_risk(:critical) }, only: [:create]

  def create
    # Payment processing — high confidence the IP is legitimate.
    # Additional check: VPN + billing-country mismatch
    if ip_is_vpn? && ip_country != current_user.billing_country
      Rails.logger.warn("[Fraud] VPN + country mismatch: #{request.remote_ip}")
      return render json: { error: 'Please disable VPN for payments' },
                    status: :unprocessable_entity
    end

    # Process payment...
  end
end
```

Step 4: Circuit Breaker for API Resilience
If the IPASIS API goes down, you don't want every request hanging for 3 seconds waiting for a timeout. A circuit breaker detects failures and bypasses the API call entirely until it recovers.
app/middleware/circuit_breaker.rb
```ruby
class CircuitBreaker
  FAILURE_THRESHOLD = 5
  RESET_TIMEOUT = 30 # seconds

  def initialize
    @failures = 0
    @last_failure_at = nil
    @state = :closed # :closed, :open, :half_open
    @mutex = Mutex.new
  end

  def call
    @mutex.synchronize do
      case @state
      when :open
        if Time.now - @last_failure_at > RESET_TIMEOUT
          @state = :half_open
        else
          Rails.logger.debug("[CircuitBreaker] Open — skipping API call")
          return nil
        end
      end
    end

    result = yield

    @mutex.synchronize do
      @failures = 0
      @state = :closed
    end
    result
  rescue StandardError => e
    @mutex.synchronize do
      @failures += 1
      @last_failure_at = Time.now
      if @failures >= FAILURE_THRESHOLD
        @state = :open
        Rails.logger.warn("[CircuitBreaker] Opened after #{@failures} failures")
      end
    end
    raise e
  end
end
```

Integrate the circuit breaker into the middleware:
```ruby
class BotDetectionMiddleware
  def initialize(app)
    @app = app
    @api_key = ENV.fetch('IPASIS_API_KEY')
    @redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0'))
    @circuit_breaker = CircuitBreaker.new
  end

  def lookup_ip(ip)
    @circuit_breaker.call do
      uri = URI("#{IPASIS_URL}?ip=#{ip}")
      http = Net::HTTP.new(uri.host, uri.port)
      http.use_ssl = true
      http.open_timeout = 2
      http.read_timeout = 3

      req = Net::HTTP::Get.new(uri)
      req['Authorization'] = "Bearer #{@api_key}"

      response = http.request(req)
      raise "API error: #{response.code}" unless response.is_a?(Net::HTTPSuccess)

      JSON.parse(response.body)
    end
  rescue StandardError => e
    Rails.logger.error("[BotDetection] #{e.message}")
    nil # Fail open
  end
end
```

Step 5: Active Job Background Scoring
For non-critical routes where you want risk data without adding latency, use Active Job to score IPs in the background. The score is stored and available for later decisions (e.g., flagging accounts for review).
app/jobs/ip_risk_scoring_job.rb
```ruby
class IpRiskScoringJob < ApplicationJob
  queue_as :default

  # :polynomially_longer requires Rails 7.1+ (use :exponentially_longer on 7.0)
  retry_on StandardError, wait: :polynomially_longer, attempts: 3

  def perform(ip_address, context = {})
    risk_data = IpasisClient.lookup(ip_address)
    return unless risk_data

    # Store the result
    IpRiskRecord.upsert({
      ip_address: ip_address,
      risk_score: risk_data['risk_score'],
      is_vpn: risk_data['is_vpn'],
      is_tor: risk_data['is_tor'],
      is_datacenter: risk_data['is_datacenter'],
      is_proxy: risk_data['is_proxy'],
      country_code: risk_data['country_code'],
      isp: risk_data['isp'],
      checked_at: Time.current,
    }, unique_by: :ip_address)

    # Flag the user if risk is high
    if context[:user_id] && risk_data['risk_score'].to_f >= 0.7
      user = User.find_by(id: context[:user_id])
      user&.update!(flagged: true, flag_reason: 'high_risk_ip')
      AdminNotifier.flag_alert(user, risk_data).deliver_later
    end
  end
end
```

app/services/ipasis_client.rb
```ruby
class IpasisClient
  BASE_URL = 'https://api.ipasis.com/v1'

  def self.lookup(ip)
    conn = Faraday.new(url: BASE_URL) do |f|
      f.request :json
      f.response :json
      f.adapter Faraday.default_adapter
      f.options.timeout = 5
      f.options.open_timeout = 2
    end

    response = conn.get('lookup', { ip: ip }) do |req|
      req.headers['Authorization'] = "Bearer #{ENV['IPASIS_API_KEY']}"
    end

    return nil unless response.success?

    response.body
  rescue Faraday::Error => e
    Rails.logger.error("[IpasisClient] #{e.message}")
    nil
  end
end
```

Trigger background scoring from any controller:
```ruby
class DashboardController < ApplicationController
  def show
    # Score the IP in the background — no latency impact
    IpRiskScoringJob.perform_later(
      request.remote_ip,
      { user_id: current_user.id }
    )

    @stats = current_user.dashboard_stats
  end
end
```

Step 6: Risk-Aware Rate Limiting with Rack::Attack
Combine IPASIS risk scores with Rack::Attack for intelligent rate limiting. High-risk IPs get tighter limits, trusted IPs get more headroom.
config/initializers/rack_attack.rb
```ruby
class Rack::Attack
  # Throttle state lives in Redis (Rack::Attack defaults to Rails.cache)
  Rack::Attack.cache.store = ActiveSupport::Cache::RedisCacheStore.new(
    url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0')
  )

  # Dynamic rate limits based on the risk data attached by
  # BotDetectionMiddleware (which must run earlier in the stack)
  throttle('req/ip/risk', limit: proc { |req|
    risk_data = req.env['ipasis.risk_data']
    score = risk_data&.dig('risk_score').to_f

    case
    when score >= 0.7 then 10  # High risk: 10 req/min
    when score >= 0.4 then 30  # Medium risk: 30 req/min
    else 100                   # Low risk: 100 req/min
    end
  }, period: 60) do |req|
    req.ip
  end

  # Strict limit on login attempts
  throttle('logins/ip', limit: 5, period: 300) do |req|
    req.ip if req.path == '/login' && req.post?
  end

  # Strict limit on signups from high-risk IPs
  throttle('signups/risky', limit: 2, period: 3600) do |req|
    risk_data = req.env['ipasis.risk_data']
    score = risk_data&.dig('risk_score').to_f
    req.ip if req.path == '/signup' && req.post? && score >= 0.5
  end

  # Block Tor exit nodes entirely
  blocklist('block/tor') do |req|
    risk_data = req.env['ipasis.risk_data']
    risk_data&.dig('is_tor') == true
  end

  # Custom blocked response
  self.blocklisted_responder = lambda do |_req|
    [403, { 'Content-Type' => 'application/json' },
     [{ error: 'Access denied' }.to_json]]
  end
end
```

Step 7: Structured Logging and Monitoring
Add Rails instrumentation events to track bot detection performance. Feed these into your monitoring stack (Datadog, New Relic, or custom dashboards).
config/initializers/bot_detection_instrumentation.rb
```ruby
ActiveSupport::Notifications.subscribe(/^bot_detection\./) do |name, start, finish, _id, payload|
  duration_ms = ((finish - start) * 1000).round(2)

  Rails.logger.info({
    event: name,
    duration_ms: duration_ms,
    ip: payload[:ip],
    risk_score: payload[:risk_score],
    blocked: payload[:blocked],
    cached: name.include?('cache_hit'),
    timestamp: Time.current.iso8601,
  }.to_json)

  # StatsD metrics (if using Datadog/StatsD)
  if defined?(StatsD)
    StatsD.increment("bot_detection.#{name.split('.').last}")
    StatsD.measure("bot_detection.latency", duration_ms)
    StatsD.increment("bot_detection.blocked") if payload[:blocked]
  end
end
```

Step 8: API Endpoint Protection
If you expose APIs (JSON endpoints, GraphQL, webhooks), protect them with risk-aware authentication.
app/controllers/api/v1/base_controller.rb
```ruby
module Api
  module V1
    class BaseController < ActionController::API
      include BotDetectable

      before_action :check_api_risk

      private

      def check_api_risk
        score = ip_risk_score || 0

        # Datacenter IPs are common for APIs — adjust the threshold
        threshold = if ip_is_datacenter? && valid_api_key?
                      0.9 # Known API consumers from datacenters = OK
                    else
                      0.75
                    end

        if score >= threshold
          render json: {
            error: 'Rate limited',
            retry_after: 60,
            reason: 'ip_risk_threshold_exceeded'
          }, status: :too_many_requests
        end
      end

      def valid_api_key?
        api_key = request.headers['X-API-Key']
        ApiKey.active.exists?(key: api_key)
      end
    end
  end
end
```

Step 9: Complete Production Setup
Here's everything wired together — the final middleware with Redis caching, circuit breaker, instrumentation, and Sidekiq-compatible background scoring:
app/middleware/bot_detection_middleware.rb — production-ready
```ruby
require 'net/http'
require 'json'
require 'redis'
require 'ipaddr'

class BotDetectionMiddleware
  IPASIS_URL = 'https://api.ipasis.com/v1/lookup'
  BLOCK_THRESHOLD = 0.85
  CACHE_TTL = 300
  CACHE_PREFIX = 'ipasis:risk:'
  PRIVATE_RANGES = %w[10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 127.0.0.0/8]
                   .map { |r| IPAddr.new(r) }.freeze

  def initialize(app)
    @app = app
    @api_key = ENV.fetch('IPASIS_API_KEY')
    @redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0'))
    @circuit_breaker = CircuitBreaker.new
  end

  def call(env)
    request = ActionDispatch::Request.new(env)
    ip = request.remote_ip
    return @app.call(env) if private_ip?(ip)

    start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    risk_data = cached_lookup(ip)
    duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start_time

    ActiveSupport::Notifications.instrument('bot_detection.lookup', {
      ip: ip,
      risk_score: risk_data&.dig('risk_score'),
      duration_ms: (duration * 1000).round(2),
      blocked: risk_data && risk_data['risk_score'].to_f >= BLOCK_THRESHOLD,
    })

    if risk_data && risk_data['risk_score'].to_f >= BLOCK_THRESHOLD
      return blocked_response(ip, risk_data)
    end

    env['ipasis.risk_data'] = risk_data
    @app.call(env)
  end

  private

  def cached_lookup(ip)
    cache_key = "#{CACHE_PREFIX}#{ip}"
    cached = @redis.get(cache_key)
    return JSON.parse(cached) if cached

    risk_data = @circuit_breaker.call { fetch_from_api(ip) }
    @redis.setex(cache_key, CACHE_TTL, risk_data.to_json) if risk_data
    risk_data
  rescue Redis::BaseError
    @circuit_breaker.call { fetch_from_api(ip) } rescue nil
  rescue StandardError => e
    Rails.logger.error("[BotDetection] #{e.message}")
    nil # Fail open — never 500 on API errors
  end

  def fetch_from_api(ip)
    uri = URI("#{IPASIS_URL}?ip=#{ip}")
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    http.open_timeout = 2
    http.read_timeout = 3

    req = Net::HTTP::Get.new(uri)
    req['Authorization'] = "Bearer #{@api_key}"

    response = http.request(req)
    raise "API #{response.code}" unless response.is_a?(Net::HTTPSuccess)

    JSON.parse(response.body)
  end

  def private_ip?(ip)
    addr = IPAddr.new(ip)
    PRIVATE_RANGES.any? { |range| range.include?(addr) }
  rescue IPAddr::InvalidAddressError
    false
  end

  def blocked_response(ip, data)
    Rails.logger.warn("[BotDetection] Blocked #{ip} — #{data['risk_score']}")
    [403, { 'Content-Type' => 'application/json' },
     [{ error: 'Access denied', reason: 'suspicious_ip' }.to_json]]
  end
end
```

Bonus: Devise Integration
If you use Devise for authentication, add IPASIS checks to the Devise controllers:
app/controllers/users/sessions_controller.rb
```ruby
class Users::SessionsController < Devise::SessionsController
  include BotDetectable

  before_action -> { require_low_risk(:high) }, only: [:create]

  def create
    super do |user|
      # Log risk data on successful login
      SignInAudit.create!(
        user: user,
        ip: request.remote_ip,
        risk_score: ip_risk_score,
        is_vpn: ip_is_vpn?,
        is_datacenter: ip_is_datacenter?,
        country: ip_country,
        user_agent: request.user_agent
      )

      # Force 2FA for high-risk logins
      if ip_risk_score && ip_risk_score >= 0.5 && !session[:otp_verified]
        sign_out(user)
        session[:pending_2fa_user_id] = user.id
        redirect_to two_factor_path and return
      end
    end
  end
end
```

Bonus: Webhook Protection
app/controllers/webhooks_controller.rb
```ruby
class WebhooksController < ApplicationController
  include BotDetectable

  skip_before_action :verify_authenticity_token
  before_action :verify_webhook_source

  def stripe
    # Process Stripe webhook...
  end

  def github
    # Process GitHub webhook...
  end

  private

  def verify_webhook_source
    # Known webhook sources are datacenter IPs — that's expected.
    # Unknown datacenter IPs sending webhooks, however, are suspicious.
    return if known_webhook_ip?(request.remote_ip)

    if ip_is_datacenter? && ip_risk_score && ip_risk_score >= 0.6
      Rails.logger.warn("[Webhook] Unknown datacenter IP: #{request.remote_ip}")
      head :forbidden
    end
  end

  def known_webhook_ip?(ip)
    # Stripe, GitHub, etc. publish their IP ranges
    WebhookAllowlist.include?(ip)
  end
end
```

Performance Benchmarks
| Setup | Avg Latency | p99 Latency | API Calls/hr (1K RPM) |
|---|---|---|---|
| No cache | 25-50ms | 120ms | 60,000 |
| Redis cache (5min TTL) | <1ms (hit) / 30ms (miss) | 45ms | ~2,000 |
| Redis + circuit breaker | <1ms (hit) / 0ms (open) | 30ms | ~2,000 |
| Background scoring only | 0ms (async) | 0ms | ~60,000 |
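With caching in place, API volume depends mostly on how many distinct IPs you see per TTL window, not on raw request count. A back-of-envelope model (the traffic figures below are illustrative, not measured):

```ruby
# Each active unique IP costs at most one API call per TTL window;
# without caching, every request costs one call.
def estimated_api_calls_per_hour(requests_per_hour:, unique_ips:, ttl_seconds:)
  [requests_per_hour, unique_ips * (3600.0 / ttl_seconds)].min.round
end

# ~150 active unique IPs per 5-minute window at 1K RPM:
estimated_api_calls_per_hour(requests_per_hour: 60_000, unique_ips: 150, ttl_seconds: 300)
# => 1800
```

The more your traffic concentrates among repeat visitors, the closer you get to the cached figures in the table.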
Nginx / Reverse Proxy Configuration
Make sure your Nginx config forwards the real client IP to Rails:
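On the Rails side, ActionDispatch::RemoteIp only honors X-Forwarded-For from proxies it trusts. If your proxy sits outside the default trusted private ranges, list it explicitly. A sketch (the addresses are examples, and note that setting this option replaces Rails' default trusted list):

```ruby
# config/initializers/trusted_proxies.rb (adjust to your topology)
require 'ipaddr'

Rails.application.config.action_dispatch.trusted_proxies = [
  IPAddr.new('127.0.0.1'),    # local
  IPAddr.new('10.0.0.0/8'),   # internal network (example)
  IPAddr.new('203.0.113.7')   # public IP of the Nginx host (example)
]
```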
```nginx
# nginx.conf
upstream rails_app {
  server unix:///var/run/puma.sock;
}

server {
  listen 80;
  server_name yourdomain.com;

  location / {
    proxy_pass http://rails_app;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
  }
}
```

What's Next?
}What's Next?
You now have a production-ready bot detection layer in your Rails app. Here are some ways to extend it:
- Add email risk checking — Combine IP risk with disposable email detection for signup protection
- Build a risk dashboard — Query `IpRiskRecord` to visualize traffic patterns and blocked IPs
- Set up alerts — Use Active Job + ActionMailer to notify your team when blocked requests spike
- Add device fingerprinting — Layer client-side fingerprinting with server-side IP intelligence for maximum accuracy
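As a starting point for the dashboard idea, a sketch querying the `IpRiskRecord` table from Step 5 (column names as defined there; runs inside a Rails app, not standalone):

```ruby
# High-risk IPs seen in the last 24 hours, grouped by country —
# raw material for a simple dashboard view
high_risk_by_country = IpRiskRecord
  .where('risk_score >= ?', 0.7)
  .where('checked_at > ?', 24.hours.ago)
  .group(:country_code)
  .count
```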
Ready to protect your Rails app?
Get started with 1,000 free API lookups per day. No credit card required. Full API access from day one.
Get Your Free API Key →