A sovereign, modular platform that collects, processes and certifies terabytes of data and billions of transactions every day. Your own data, not the figures reported by the operators. No leakage. No dispute. No excuse.
Snype is a modular platform, designed to be deployed according to your priorities. Whether your most pressing need is revenue assurance, fraud detection, mobile money oversight or VAT collection — you start with what matters most, and expand from there.
There is no need to start a new project every time you need a new capability. No new data pipeline, no new integration, no new vendor.
We collect raw, unmediated data directly from every operator's core network and core banking elements, in near real-time. Not the data they report to you. Not pre-processed exports. Not summaries. And we make sure every single file is received, decoded and processed — nothing is left behind.
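As a sketch of what "nothing is left behind" entails: core network elements typically emit CDR files with running sequence numbers, so a completeness check reduces to gap detection. The function below is purely illustrative of that idea, not Snype's actual pipeline.

```python
# Illustrative completeness check over CDR file sequence numbers.
# Assumption (not from the vendor): each core element numbers its files
# monotonically, so any gap marks a file that was never received.
def missing_sequences(received: list[int]) -> list[int]:
    """Return the sequence numbers absent between the first and last file seen."""
    expected = set(range(min(received), max(received) + 1))
    return sorted(expected - set(received))

missing_sequences([1001, 1002, 1004, 1007])  # -> [1003, 1005, 1006]
```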
Each module is built on the same core: one platform, one architecture, one set of operational procedures, one set of documentation. Only one tool to learn, only one system to trust.
Each module embeds its own quality controls, so you always know you are working with the right data. And each additional module you deploy enriches and reinforces the ones already in place.
The result: end-to-end certified data. No leakage. No blind spot. No dispute.
Our solution is designed to run wherever your infrastructure and regulatory framework require it.
It all depends on your requirements, your constraints, your available infrastructure, and your budget.
But the architecture is the same. The modules are the same. The certified data is the same. The choice is yours.
| | On-premise | Private Cloud | Public Cloud (AWS, Azure, GCP) |
|---|---|---|---|
| Data Sovereignty | Full control. Data never leaves your premises | Depends on cloud location. In-country private cloud ensures sovereignty. | Data may transit or be stored outside the country. Requires specific contractual guarantees. |
| Security & Access | Physical security under your control. Key-locked racks, IP cameras, full server hardening. | Shared responsibility model. You manage application security, cloud provider manages infrastructure. | Shared responsibility model. Less control over physical infrastructure. |
| Cost Predictability | Fixed annual engagement. No per-GB, per-query, or per-user fees. | Monthly cloud costs. Variable depending on usage, storage, and compute. | Monthly cloud costs. Can be unpredictable at scale — per-query, per-GB, per-instance billing. |
| High Availability | Your responsibility. Snype provides active-active geo-redundancy with automated failover and 8h UPS. | Managed by cloud provider. SLA-dependent. | Managed by cloud provider. SLA-dependent. |
| Deployment Speed | 15 days typical — includes hardware procurement and installation. | Faster initial setup if cloud infra is available. Historical data load still constrained by bandwidth. | Fastest initial setup. But sustained performance depends on instance sizing and cost. |
| Bandwidth Requirement | Local network — no constraint. Critical for loading 3+ years of historical data (5–10 TB/day during catch-up). | Depends on connectivity to cloud. Sufficient bandwidth required for initial data load. | Upload of 5–10 TB/day requires sustained high-bandwidth connection. |
| Disk I/O Performance | Full control. NVMe disks deliver 1–2 GB/s sustained write — required for real-time CDR processing at scale. | Depends on cloud tier. Dedicated storage can meet requirements, but at premium cost. | Shared storage tiers rarely sustain 1–2 GB/s write. Dedicated instances required — significantly higher cost. |
| Storage Capacity | 500 TB+ compressed, always queryable. Expandable at hardware cost only. | Scalable, but storage costs accumulate monthly. | Per-GB pricing. At 500 TB, storage costs alone can exceed $100K+/year. |
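The bandwidth constraint above is simple arithmetic: moving 5–10 TB within a 24-hour window requires roughly 0.5–1 Gbit/s of sustained throughput, before any protocol overhead or retransmissions. A back-of-the-envelope check, assuming decimal terabytes:

```python
def required_mbit_per_s(tb_per_day: float) -> float:
    """Sustained line rate (Mbit/s) needed to move tb_per_day decimal TB in 24 h."""
    bits_per_day = tb_per_day * 1e12 * 8
    return bits_per_day / 86_400 / 1e6

required_mbit_per_s(5)   # ~463 Mbit/s
required_mbit_per_s(10)  # ~926 Mbit/s
```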
Not all data sources bring the same value. We understand every market has its own history. Some regulators have built CDR-sharing agreements with operators over the years. Some have invested in probes. Some rely on periodic reports. Snype integrates every available source: raw unmediated files, operator-formatted files, probe captures, regulatory reports, financial exports. And where gaps remain, we work with operators and banks to complete the picture. Every source is cross-checked against the others. The more you have, the stronger the certification.
| | Non-intrusive collection of raw unmediated files | Detailed exports pre-formatted by the operator | Active probes | Call generators, crowd sourcing & drive tests | Operator's own dashboards |
|---|---|---|---|---|---|
| Data Sources | CDR, IPDR, transactions, logs, snapshots — from core network elements, mobile money platforms, core banking. | Files extracted, restructured and provided by the operator on request or on schedule. | Network traffic captured at specific points in the operator's network. | Test calls, ad-hoc measurements, drive tests. | Aggregated reports, summaries, pre-formatted exports. |
| Legal Risk & Liability | Non-intrusive: no risk, no liability. | No physical liability, but the operator can argue that extraction and formatting efforts divert resources from core operations. | Hardware on operator premises: any incident can be blamed on the regulator's equipment. | Traffic injected into the live network: the operator can claim interference or risk. | No risk, no liability. |
| Data Availability | Full functional coverage. Generated natively; operators already collect them, so no extra effort. | Requires the operator to develop dedicated mechanisms to extract, decode and format. | Requires physical access and hardware installation at operator premises. | No operator dependency, but requires deploying test equipment and planning each campaign independently. | Operator defines where, when and in what format to report. |
| Data Integrity | Unmediated data. Full traceability. | Operator filters and applies business rules; no independent verification. | Raw capture. | Verifiable and tamper-proof, but only reflects test conditions. | Operator controls all business rules and aggregation levels. |
| Data Completeness | Every expected file tracked. Missing, late or corrupted files detected automatically. Consistency checks at every step. | No traceability. The only possible verification is cross-checking every transaction across multiple data sources, assuming they are available. | Blind to any traffic that bypasses the probe. Configuration changes and equipment upgrades create gaps, and they are almost never anticipated in time. | Only covers a sample of transactions for a fraction of the time. | No possibility of control. |
| Data History | Operators are legally required to retain these records; multiple years of historical data can be retrieved and processed. | Extracting and reformatting years of history requires operator resources that are rarely available or prioritized. | Only captures from the install date; no prior history available. | No prior history available. | Historical reports exist, but formats may vary over time. |
| Resilience | In case of an incident, data can be recovered and reprocessed. | Relies heavily on the operator's commitment and resources. | In case of an incident, data is not recoverable. | No test means no data. | No control by the regulator. |
| Scalability | Network evolutions are automatically integrated into existing data flows. | Relies heavily on the operator's commitment and resources. | Requires hardware investment and installation; data missed during an unanticipated evolution is not recoverable. | Requires more hardware and more resources. | No control by the regulator. |
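The cross-checking of sources against one another comes down to set reconciliation on a common transaction key. A minimal sketch, with illustrative keys and source names rather than the actual matching logic:

```python
# Hypothetical reconciliation of one day's transaction IDs between two
# sources, e.g. raw unmediated CDRs vs an operator-formatted export.
def cross_check(raw_cdrs: set[str], operator_export: set[str]) -> dict[str, set[str]]:
    return {
        "matched": raw_cdrs & operator_export,
        "missing_from_export": raw_cdrs - operator_export,    # possible under-reporting
        "unexplained_in_export": operator_export - raw_cdrs,  # possible duplicates or errors
    }

cross_check({"tx1", "tx2", "tx3"}, {"tx2", "tx3", "tx4"})
# -> matched: {tx2, tx3}; missing_from_export: {tx1}; unexplained_in_export: {tx4}
```

With more than two sources, the same reconciliation is applied pairwise; a transaction confirmed by every available source is what ends up certified.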
We've been applying AI to real-world data for over a decade — starting with churn prediction models for telecom operators, then expanding into time series forecasting, customer segmentation, scoring, anomaly detection, similarity matching, and computer vision for identity verification. These models run in production, on real data, at scale. Today, we're adding another layer: large language models integrated into our operational workflows — helping our teams analyze faster, configure smarter, and catch what would otherwise be missed. The AI agents assist. Our team decides.
Many big data platforms rely on clusters that require dozens — sometimes hundreds — of servers. The impact is not just technical. It is financial and operational.
More servers mean more spare parts, more rack space, more air conditioning, more electrical infrastructure, more batteries, more generators. In many environments, this simply isn't viable.
Our solution was engineered from the ground up to minimise hardware requirements without compromising scale or performance.
| | Per Server * | Snype (5–10 servers) | Generic A (~50 servers) | Generic B (~100 servers) |
|---|---|---|---|---|
| Average power draw | 450 W | ~4.5 kW | ~22.5 kW | ~45 kW |
| Monthly energy (server + cooling) | 468 kWh | 4,680 kWh | 23,400 kWh | 46,800 kWh |
| Monthly energy cost (at $0.10/kWh) | $47 | $470 | $2,350 | $4,700 |
| AC investment | $180 | $1,800 | $9,000 | $18,000 |
| UPS + Batteries | $300 | $3,000 | $15,000 | $30,000 |
| Generator investment | $400 | $4,000 | $20,000 | $40,000 |
| Total cost of ownership — Year 1 | | $14,440 | $72,450 | $144,160 |
| Total cost of ownership — Year 2 | | $16,125 | $100,650 | $200,320 |
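The headline figures above can be reproduced from the table's own stated inputs ($0.10/kWh, a 30-day month, and the cooling overhead implied by 468 kWh against a 324 kWh IT load). The factor of ~1.44 below is inferred from those two rows, not quoted by the vendor; a sketch of the arithmetic for the per-server and Snype columns:

```python
# Reconstructing the Year-1 TCO from the table's stated inputs.
WATTS, HOURS_PER_MONTH, PRICE_PER_KWH = 450, 24 * 30, 0.10
OVERHEAD = 468 / 324  # server + cooling, inferred from the first two energy rows

monthly_kwh = WATTS * HOURS_PER_MONTH / 1000 * OVERHEAD   # ~468 kWh
monthly_cost = round(monthly_kwh * PRICE_PER_KWH)         # ~$47

# Year-1 TCO per server: 12 months of energy plus AC, UPS and generator shares.
year1_per_server = monthly_cost * 12 + 180 + 300 + 400    # $1,444
snype_year1 = 10 * year1_per_server                       # $14,440, matching the Snype column
```

The Generic A and Generic B columns follow the same arithmetic scaled to ~50 and ~100 servers, with small rounding differences.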