Case Study

Raposa.ai

Building a cloud security incident platform that could ingest AWS telemetry safely, organize it by customer, and create a credible path from raw events to AI-assisted review.


Problem: Cloud security telemetry is noisy, multi-step incident handling is slow, and most teams lack a clean bridge from detection to guided response.
Solution: A cloud-native SaaS with organization-scoped onboarding, signed event ingestion, secure storage, and a path to Bedrock-backed review and remediation guidance.
Stack: AWS Lambda, Cognito, DynamoDB, S3, SES, Route 53, Bedrock, Next.js, API-key and HMAC-based event ingestion.
Outcome: A credible operator workflow for turning raw CloudTrail events into structured review, collaboration, and compliance-friendly incident handling.

Why this mattered

Raposa.ai treated AI as part of a secure operator workflow, not as a layer pasted on top of raw telemetry.

The Problem

The hard part of cloud security is rarely collecting logs. The hard part is getting from raw telemetry to an operationally useful incident workflow. Teams can already generate endless signal from AWS, but they still struggle with customer isolation, secure ingestion, triage, collaboration, and the audit trail needed to show that something was actually reviewed and handled.

The Market Gap

There is no shortage of security tools that generate alerts, dashboards, and telemetry. The gap is what happens after detection. Smaller security teams and cloud-heavy startups often end up with fragmented tooling: one system for logs, another for tickets, another for collaboration, and very little help turning raw cloud events into a structured response workflow.

That creates space for a product like Raposa. The opportunity was not to become yet another alert stream. It was to sit between cloud telemetry and incident handling, using AI where it helped summarize context, guide remediation, and create compliance-friendly records of what happened and what the team did next.

Who It Was For

The most obvious customers were startups and small-to-mid-sized companies running serious workloads on AWS without a large in-house security operations function. These are the teams that still need better cloud visibility, response discipline, and compliance evidence, but do not have the headcount or budget to stitch together a heavyweight SOC stack.

There was also a strong fit for regulated or compliance-sensitive buyers: SaaS companies facing enterprise security reviews, fintech and health-adjacent products that needed clearer incident handling, and teams under ISO 27001, SOC 2, or similar pressure. For those customers, the value is not only finding suspicious events. It is having a defensible workflow around triage, remediation, and documentation.

The Product Shape

Raposa was designed as a cloud-native SaaS for managing organizations, onboarding users, issuing API credentials, and ingesting AWS CloudTrail data. Each organization had its own management layer, membership model, API credentials, and event namespace. That matters because the architecture was not just about analysis. It was about building the control plane that makes multi-tenant security workflows possible in the first place.

How The System Worked

Authentication and user lifecycle were handled through Cognito. Organization and membership data lived in DynamoDB. Invitation and product email flows ran through SES. Incoming CloudTrail events were sent to a dedicated ingestion endpoint and verified with an API key plus HMAC signature before being accepted. Once validated, events were written to S3 under organization-scoped paths, creating a clean separation model for downstream processing, retention, and later analysis.
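A minimal sketch of the verification step described above, assuming an HMAC-SHA256 signature over the raw request body. The function name and digest choice are illustrative assumptions; the case study does not document the exact scheme.

```python
import hashlib
import hmac

# Hypothetical sketch: the request carries an API key (identifying the
# organization) and an HMAC-SHA256 signature of the body, computed with
# that key's secret. Only requests that pass this check are persisted.
def verify_event_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Return True only if the signature matches the request body."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(expected, signature_hex)
```

Rejecting unverifiable requests at this boundary is what keeps the event store clean: nothing lands in S3 unless it can be tied to a specific organization's credential.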

On the AI side, the platform integrated with Amazon Bedrock and tracked model selection, fallback behavior, and per-model token consumption. That is an important design choice because it treats LLM capability as part of the production system, not an unmetered black box. The architecture also anticipated analysis-results storage, monitoring, and operator review flows rather than stopping at raw inference.
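The selection, fallback, and token-accounting pattern might look like the sketch below. The model IDs, the `invoke` callable, and the `UsageMeter` shape are all illustrative assumptions, not the platform's actual configuration.

```python
from dataclasses import dataclass, field

# Illustrative preference order; real Bedrock model IDs are versioned strings.
PREFERRED_MODELS = ["anthropic.claude-3-sonnet", "anthropic.claude-3-haiku"]

@dataclass
class UsageMeter:
    """Accumulates token counts so inference is metered, not a free side effect."""
    input_tokens: int = 0
    output_tokens: int = 0
    calls: list = field(default_factory=list)

    def record(self, model_id: str, in_tok: int, out_tok: int) -> None:
        self.input_tokens += in_tok
        self.output_tokens += out_tok
        self.calls.append(model_id)

def invoke_with_fallback(invoke, prompt: str, meter: UsageMeter) -> str:
    """Try each model in preference order; meter whichever one succeeds."""
    last_err = None
    for model_id in PREFERRED_MODELS:
        try:
            # invoke is injected: (model_id, prompt) -> (text, in_tokens, out_tokens)
            text, in_tok, out_tok = invoke(model_id, prompt)
        except RuntimeError as err:  # e.g. throttling or model unavailable
            last_err = err
            continue
        meter.record(model_id, in_tok, out_tok)
        return text
    raise last_err
```

Injecting the `invoke` callable keeps the fallback and metering logic testable without touching Bedrock itself.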

Architecture Flow

System path

Identity, tenancy, signed ingestion, and analysis all had to connect cleanly for the product to be trustworthy.

01

Organization Setup

Customers onboard through Cognito, create an organization, and receive scoped credentials.

02

Signed Ingestion

CloudTrail events are sent to the ingestion API and verified with API key plus HMAC signature.

03

Secure Storage

Validated events are written into private S3 paths partitioned by organization for later processing.

04

Review And Guidance

Bedrock-backed analysis and product workflows turn raw events into operator-facing review and remediation.

The platform effectively had two halves. The first half was the SaaS control plane: Cognito for authentication; DynamoDB for users, organizations, memberships, and usage data; SES for invitations and account email; and a Next.js frontend for organization management and onboarding. The second half was the event and analysis path: external CloudTrail events arriving through an ingestion API, validation with API key plus HMAC signature, S3 storage under organization-specific paths, and downstream analysis capability using Bedrock and supporting infrastructure.
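The organization-partitioned storage step can be sketched as a key-layout function. The prefix scheme and date partitioning here are assumptions for illustration; the case study does not specify Raposa's actual key format.

```python
import uuid
from datetime import datetime, timezone

# Hypothetical sketch: every validated event is written under its
# organization's prefix, with date partitions for retention and later
# batch processing. One org's telemetry never shares a prefix with another's.
def event_key(org_id: str, received_at: datetime) -> str:
    """Build an S3 object key that isolates one organization's events."""
    d = received_at.astimezone(timezone.utc)
    return f"orgs/{org_id}/events/{d:%Y/%m/%d}/{uuid.uuid4().hex}.json"
```

Prefix-per-organization layouts also make IAM and lifecycle policies simpler, since both can be scoped to a single `orgs/<id>/` prefix.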

That split matters because many security tools jump straight to “analyze the data.” Raposa had to solve the earlier problems first: who owns the data, how events are tied back to an organization, how API credentials are issued safely, and how the product keeps one customer’s telemetry and usage clearly separated from another’s.

Why It Was Designed This Way

Several decisions in the codebase make more sense when you look at them as product and operational choices, not just engineering preferences. The MVP was intentionally shaped around a single organization per user. That may sound limiting, but it reduces a lot of complexity early: no organization switcher, fewer ambiguous permission edges, simpler onboarding, and a much clearer mental model for customers who just want to connect one environment and start reviewing events.

The signed ingestion model follows the same logic. Security event pipelines are not the place for casual trust boundaries. Using API keys plus HMAC signatures gave Raposa a cleaner way to verify event provenance and tie requests back to a specific organization before anything was persisted. In practice, that helps both security and tenancy because the system can reject bad input before it pollutes the event store.
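For concreteness, the sending side of such a scheme might look like this. The header names, key format, and canonical JSON encoding are illustrative assumptions, not Raposa's documented API contract.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: how a CloudTrail forwarder could sign an event batch
# before posting it to the ingestion API. The API key identifies the
# organization; the HMAC proves possession of that key's secret.
def sign_request(api_key: str, secret: bytes, events: list) -> tuple:
    """Return (body, headers) for a signed ingestion request."""
    # Compact, deterministic encoding so sender and receiver hash identical bytes
    body = json.dumps(events, separators=(",", ":"), sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    headers = {
        "X-Api-Key": api_key,      # ties the request to an organization
        "X-Signature": signature,  # verified server-side before persistence
        "Content-Type": "application/json",
    }
    return body, headers
```

The deterministic encoding matters: if sender and receiver serialize the payload differently, valid requests would fail verification.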

Operator Workflow

On the frontend side, the platform already exposed an organization management workflow with organization details, member invitations, and API key handling. That is important context for the case study because the security product was not only an internal processing pipeline. It was being shaped into a usable SaaS surface where a customer could onboard, manage access, copy credentials, and eventually review findings and billing in one place.

The event review model also had commercial and operational intent behind it. Usage tracking and Stripe integration were built into the service layer, which shows the design was thinking beyond a prototype toward a real product with review quotas, subscription tiers, and traceable usage.

Security And Compliance Decisions

Several design choices are worth calling out. Event ingestion was signed rather than left as an open webhook. Events were written to private S3 buckets with encryption, versioning, and lifecycle controls. Analysis results had their own bucket and access policy. Usage and audit-like data lived in DynamoDB, which keeps a clearer separation between operational records and the raw event store. Even the Bedrock integration was designed with explicit model settings, fallback behavior, token accounting, and cost monitoring instead of treating inference as a free side effect.
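The bucket hardening described above can be sketched as the parameter payloads one would pass to boto3's `put_bucket_encryption`, `put_bucket_versioning`, and `put_bucket_lifecycle_configuration` calls. The retention window, storage class, and prefix below are illustrative assumptions, not Raposa's actual settings.

```python
# Hypothetical sketch of event-bucket hardening, expressed as pure
# configuration payloads (no AWS calls) so the intent is easy to review.
def bucket_security_config(bucket: str) -> dict:
    return {
        # Default server-side encryption for every object written
        "encryption": {
            "Bucket": bucket,
            "ServerSideEncryptionConfiguration": {
                "Rules": [
                    {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                ]
            },
        },
        # Versioning preserves history if an event object is overwritten
        "versioning": {
            "Bucket": bucket,
            "VersioningConfiguration": {"Status": "Enabled"},
        },
        # Lifecycle rule: archive older events to cheaper storage
        "lifecycle": {
            "Bucket": bucket,
            "LifecycleConfiguration": {
                "Rules": [
                    {
                        "ID": "archive-old-events",
                        "Status": "Enabled",
                        "Filter": {"Prefix": "orgs/"},
                        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                    }
                ]
            },
        },
    }
```

Keeping these as reviewable payloads (or infrastructure-as-code) is itself a compliance win: the controls are inspectable rather than hand-applied.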

From a compliance perspective, that matters because the platform was being shaped around traceability, tenancy boundaries, credential handling, and evidence-producing workflows. It is a more serious foundation than simply piping logs into a model and hoping the output is useful.

Why This Was Interesting

The technical challenge was not only to analyze security events. It was to make the whole path trustworthy: who is allowed to send events, how those events are partitioned, where they are stored, how organizations and memberships are modeled, and how AI-generated guidance fits into a system that still needs operator confidence and compliance evidence.

A lot of AI security products start with the model and work backward. Raposa was more interesting because the stronger question was how to build the ingestion, identity, storage, and review architecture so that AI could be useful without becoming the least trustworthy part of the stack.

Technical Tradeoffs

The repo shows a pragmatic mix of serverless and containerized thinking. The SaaS API surface was built around Lambda and HTTP APIs, which is a good fit for organization management, invitation flows, and event ingestion. At the same time, the infrastructure also included an ECS-based service path for heavier CloudTrail summarization workloads. That hybrid approach is a sensible response to the reality that not every part of an AI or analysis system fits the same execution model.

There was also a clear evolution in the AI strategy itself. The platform had an ECS-backed summarization path, but also a Bedrock integration plan and service layer designed to move inference toward managed models. That is exactly the kind of transition mature platforms make: start where you need control, then move toward simpler managed infrastructure when it improves cost, operations, or speed of delivery.

Usage tracking is another example of design intent. The LLM capability was not treated as a magical extra feature. It was wired into usage services, tier limits, token accounting, and billing flows. That matters because it shows the system was being built as a real product with commercial and operational constraints, not as a lab experiment.
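The tier-limit side of that wiring reduces to a simple quota gate. The tier names and quota numbers below are invented for illustration; they are not Raposa's actual pricing or limits.

```python
# Illustrative-only monthly token quotas per subscription tier
TIER_TOKEN_QUOTAS = {"free": 50_000, "team": 1_000_000}

def check_quota(tier: str, tokens_used: int, tokens_requested: int) -> bool:
    """Return True if this analysis request fits within the tier's quota.

    Unknown tiers get a quota of zero, so requests fail closed.
    """
    quota = TIER_TOKEN_QUOTAS.get(tier, 0)
    return tokens_used + tokens_requested <= quota
```

Checking the quota before invoking the model, and recording actual consumption after, is what turns LLM usage into something billable and auditable rather than an open-ended cost.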

There is also an honest product-stage reality here: some parts were already integrated and working, while others were still being shaped. That is not a weakness in the case study. It is part of what makes the work meaningful. The value was in shaping a platform that could evolve from secure event capture and control-plane foundations into richer analysis and incident workflows without needing to rebuild the core model.

What It Demonstrates

This project shows the kind of work I like most: taking a security-heavy problem, shaping the product and platform architecture together, and making room for AI where it supports the operator instead of replacing judgment. It sits at the intersection of SaaS control-plane design, secure cloud ingestion, compliance thinking, and production AI integration.

It also reflects how I work: not as a “prompt engineer,” but as someone designing the surrounding system that makes AI usable in production. That includes identity, storage, tenancy, cloud infrastructure, cost controls, monitoring, and the human workflow the product has to support.