Knowledge Base

Frequently Asked Questions


Selecting a Revenue Assurance Solution

Most revenue assurance solutions on the market are designed for operators — their goal is to help an operator maximize its own revenue and reduce internal billing leakage. These tools are built to serve the operator's commercial interests.

A regulatory revenue assurance solution serves a fundamentally different purpose: enabling an independent authority to verify what operators declare, detect fraud or under-reporting, and ensure tax and levy compliance.

Key Differences
Data access model: operator tools run inside the operator's own systems; regulatory tools must collect data from multiple operators simultaneously, without impacting operator networks.

Governance model: operator tools are managed by the operator; regulatory tools must be managed independently.

Output format: regulatory tools must produce legally defensible audit trails, not just operational dashboards.

Cross-check capabilities: regulatory tools perform cross-verifications across operators and data sources to ensure market-wide consistency.

Business rules consistency: regulators must ensure the same business rules are applied identically across all operators for effective benchmarking.

When evaluating a revenue assurance solution, regulators must prioritize five non-negotiable criteria:

1 — Vendor Independence
Your monitoring solution must come from a vendor with no commercial or capital relationship with the operators or banks you regulate. If a carrier or an operator holds shares in your solution provider — or if your provider already has larger contracts with those operators or their parent companies — a major conflict of interest exists. When the time comes to detect discrepancies and apply penalties, whose side will they be on?
2 — Non-Intrusiveness
A regulatory tool must not require physical network probes or hardware installation inside the operator's infrastructure. Operational gateways and probe-based solutions create operational dependencies, introduce network risks, expose the regulator to legal liability, and raise data sovereignty concerns.
3 — Data Certification
The solution must guarantee end-to-end data quality, from collection to reporting. This means systematic controls at every stage — ingestion, decoding, processing, storage. It also means consistency controls across data sources and operators. Without certified data, any KPI produced is contestable, and any regulatory decision based on it is exposed.
4 — Data Auditability
The solution must produce verifiable, time-consistent data capable of withstanding legal and financial audits. This means a clear data lineage from raw transaction to reported KPI.
5 — Data Sovereignty
The data is yours, and it stays in your country. The solution must be deployed wherever you require it — on your own infrastructure, in a national data center, or in a sovereign cloud of your choice. Regulatory data is strategic national data; it cannot sit on servers you do not control or under legal jurisdictions you did not choose.

A non-intrusive monitoring approach collects structured data files generated by the operator's own systems — primarily call detail records (CDRs), billing records, top-up transactions, and data usage logs — without inserting any hardware or software component into the operator's live network.

This matters for several reasons:

Operational Safety
The operator's network continues to function normally. No gateway or probe can cause outages or performance degradation.
Legal Defensibility
Data collected from structured files is clean, attributable, and admissible in audit or judicial proceedings.
Operator Cooperation
Collecting raw, unaltered files does not require special technical access or goodwill from the operator.
Technology Agnosticism
The same approach works regardless of whether the operator uses 2G, 4G, 5G, or any future standard — file formats may change but the collection method remains consistent.
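As a minimal illustration of this file-based model, collection reduces to parsing the structured files the operator's systems already produce, with no interaction with the live network. The CSV layout and field names below are assumptions for illustration, not a real operator format.

```python
import csv
import io

# Hypothetical CDR layout; real files are operator- and vendor-specific.
CDR_FIELDS = ["caller", "callee", "start_time", "duration_s", "charge"]

def parse_cdr_file(raw_text):
    """Parse one structured CDR file (CSV here for illustration).

    Collection is purely file-based: the platform reads files the
    operator's own systems already generate; it never touches the
    live network.
    """
    reader = csv.DictReader(io.StringIO(raw_text), fieldnames=CDR_FIELDS)
    records = []
    for row in reader:
        row["duration_s"] = int(row["duration_s"])
        row["charge"] = float(row["charge"])
        records.append(row)
    return records

sample = "22801111,22802222,2024-05-01T08:00:00,60,25.0\n"
print(parse_cdr_file(sample)[0]["charge"])  # 25.0
```

If the operator migrates to a new network standard, only the file layout definition changes; the collection method itself is untouched.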

Every telecom operator's network is an ecosystem of equipment from different vendors — each generating data in its own proprietary format. A single operator may have 10 to 50 distinct data sources, with different file formats, field names, encoding standards, and generation frequencies.

The temptation for many RegTech providers is to ask operators to submit data in a standardized format. This approach has two fundamental problems: it compromises data integrity (the moment an operator transforms its data before submission, traceability is broken), and it shifts the burden onto the operator (who can refuse, delay, or request compensation at each network upgrade).

The right approach is to collect raw, unaltered data using an abstract data model — a standardized internal representation that maps each operator's proprietary formats into a common schema, without requiring operators to change anything on their end. A well-designed abstract model has four essential properties:

Vendor-Agnostic
It works regardless of which equipment manufacturer the operator uses — Ericsson, Huawei, Nokia, or any other.
Version-Resilient
It remains stable across upgrades to the operator's systems, so a core network evolution does not break existing analyses or require renegotiation with the operator.
Enrichable
Additional data fields can be incorporated without disrupting existing logic or requiring a rebuild of the analytical layer.
Cross-Operator Comparable
The same KPI means the same thing whether applied to Operator A or Operator B — enabling genuine market-wide analysis and benchmarking.
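The mapping idea behind an abstract data model can be sketched in a few lines: each source declares how its proprietary fields map onto one common schema, and adding a new source or vendor means adding a mapping, not rewriting the analytics. The vendor names and field names below are invented for illustration.

```python
# Common schema every source is normalized into.
COMMON_SCHEMA = ("caller", "callee", "start_time", "duration_s")

# Per-source mapping: proprietary field name -> common field name.
# "vendor_a" / "vendor_b" and their fields are illustrative, not real formats.
MAPPINGS = {
    "vendor_a": {"aNumber": "caller", "bNumber": "callee",
                 "startTs": "start_time", "dur": "duration_s"},
    "vendor_b": {"orig": "caller", "dest": "callee",
                 "begin": "start_time", "seconds": "duration_s"},
}

def normalize(record, source):
    """Map one proprietary record into the common schema."""
    mapping = MAPPINGS[source]
    out = {common: record[prop] for prop, common in mapping.items()}
    missing = [f for f in COMMON_SCHEMA if f not in out]
    if missing:
        raise ValueError(f"source {source!r} missing fields: {missing}")
    return out

r = normalize({"aNumber": "228111", "bNumber": "228222",
               "startTs": "2024-05-01T08:00:00", "dur": 60}, "vendor_a")
print(r["caller"])  # 228111
```

Enrichment is additive: a new field extends the mapping and the schema without disrupting KPIs already computed on the existing fields, which is what makes cross-operator comparison stable over time.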

Before committing to a platform, a regulatory authority should assess data quality across five dimensions:

Completeness
Does the platform capture 100% of transaction records, or does it rely on sampling? For revenue assurance and fraud detection, sampling is insufficient — it systematically underestimates leakage and can miss low-volume but high-value fraud patterns that only become visible when the full dataset is examined.
Consistency
Are the same KPIs calculated identically across all operators and all time periods? Inconsistent calculation methods make cross-operator comparison unreliable and expose the regulator to challenges from operators who can point to methodological discrepancies.
Data Collection Controls
What automated controls does the platform apply to detect and flag anomalies in incoming data — missing files, truncated records, format inconsistencies, or implausible values? Ask the vendor to demonstrate the controls applied at each stage of the data pipeline, and how anomalies are flagged, investigated, and resolved.
Traceability
Can any aggregate figure be traced back to its underlying source records? A platform that can show a revenue figure but cannot show the individual CDRs that produced it is not audit-ready. When an operator contests a figure, the regulator must be able to produce the full evidentiary chain — from the certified total down to the individual transaction.
Data Certification
What consistency controls are applied across data sources and across operators? Is usage consistent across all platforms? Across all operators? Is the remaining balance consistent with the transactions? Are these controls performed at each subscriber and transaction level? Certification is what transforms a data point into evidence that can withstand legal or regulatory challenge.
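Two of the controls above, detecting missing daily files and flagging implausible values, can be sketched as follows. The thresholds are illustrative assumptions, not a vendor's actual rules.

```python
from datetime import date, timedelta

def find_missing_days(received_days, start, end):
    """Flag calendar days for which no file was received from a source."""
    expected, missing = start, []
    while expected <= end:
        if expected not in received_days:
            missing.append(expected)
        expected += timedelta(days=1)
    return missing

def implausible(record):
    """Basic plausibility check (thresholds are illustrative):
    a call cannot have negative duration or last more than a day."""
    return record["duration_s"] < 0 or record["duration_s"] > 24 * 3600

received = {date(2024, 5, 1), date(2024, 5, 3)}
print(find_missing_days(received, date(2024, 5, 1), date(2024, 5, 3)))
# [datetime.date(2024, 5, 2)]
```

The point is that a silently absent file becomes an explicit, investigable finding rather than an invisible undercount.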

For regulatory findings to be legally defensible — enforceable through fines, licence revocations, or tax assessments — the underlying data must meet a high standard of auditability:

Chain of Custody
Clear documentation of how data moved from the operator's system to the regulatory platform: collection timestamp, transfer method, and validation steps applied at each stage.
Immutability
Raw source data must be stored in a form that cannot be retroactively modified. Any transformation or enrichment applied to the data must be documented and reversible — it must always be possible to reproduce the original source record.
Reproducibility
Any KPI, finding, or revenue calculation produced by the platform must be reproducible: given the same source data and the same calculation rules, the system must produce the same result every time. This is essential for withstanding challenge in audit or legal proceedings.
Technology Independence
The data model must remain stable as operators upgrade their systems. A finding made in 2023 must be comparable to one made in 2026, even if the operator changed their core network platforms in between. This requires an abstract data layer that normalizes records regardless of source technology.
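One common way to make a chain of custody tamper-evident, shown here purely as a sketch, is to hash-chain the custody events: each entry commits to the previous one, so any retroactive edit breaks every later hash. The event fields are hypothetical.

```python
import hashlib
import json

def custody_entry(prev_hash, event):
    """Append one chain-of-custody event. Hashing the previous entry's
    hash together with the event makes retroactive modification
    detectable: altering any past entry invalidates the whole chain."""
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

h0 = custody_entry("genesis", {"step": "collected", "file": "cdr_0501.csv"})
h1 = custody_entry(h0, {"step": "validated", "records": 125000})

# Reproducibility: replaying the same events yields the same hashes.
assert custody_entry("genesis",
                     {"step": "collected", "file": "cdr_0501.csv"}) == h0
```

The same determinism argument applies to KPIs: given identical source data and rules, the computation must yield byte-identical results on every re-run.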

Regulatory fraud detection differs from operator-side fraud detection: the regulator is not trying to protect one operator's revenue. It is trying to ensure the integrity of the entire sector and protect consumers.

International Bypass Fraud
International calls are illegally terminated as local calls using SIM cards connected to GSM gateways. This allows fraudsters to pocket the international interconnect revenue that should flow to the licensed operator — and therefore to the state through taxes on that revenue.
Local Interconnection Fraud
Manipulation of interconnect settlement records between operators to reduce amounts owed. Cross-operator reconciliation — comparing the CDRs of the originating and terminating operator for the same calls — reveals record manipulation.
Revenue Under-Reporting
Operators may selectively omit certain call types, data services, or prepaid top-up transactions from their declared revenue. Independent CDR analysis, comparing raw transaction volumes to declared revenue, surfaces these discrepancies automatically.
Scam
Scam operations exploit telecom infrastructure to defraud end users at scale: bulk SMS campaigns, robocall schemes, voice phishing (vishing), and wangiri (missed call) fraud. From a regulatory standpoint, the concern is twofold — protecting consumers and detecting whether operators are knowingly facilitating or profiting from scam traffic.
Subscription Fraud
Fraudulent activation of subscriber accounts carried out with the complicity of registration agents, accepting undue fees from fraudsters in exchange for issuing SIMs that bypass proper identity verification. Behavioral analytics on post-activation usage patterns — dormant SIMs, geographic clustering of activations by a single agent, abnormally short time between activation and first use, activation volumes statistically inconsistent with declared market conditions — flag suspicious agents and accounts for investigation.
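The cross-operator reconciliation used against interconnection fraud can be sketched as a join of originating and terminating CDRs for the same calls, flagging records that are missing or whose durations diverge. Field names and the tolerance value are illustrative assumptions.

```python
def reconcile(originating, terminating, tolerance_s=2):
    """Match originating vs terminating CDRs for the same calls and
    flag discrepancies (missing records, manipulated durations)."""
    term_by_key = {(c["caller"], c["callee"], c["start_time"]): c
                   for c in terminating}
    findings = []
    for c in originating:
        key = (c["caller"], c["callee"], c["start_time"])
        match = term_by_key.get(key)
        if match is None:
            findings.append(("missing_on_termination", key))
        elif abs(match["duration_s"] - c["duration_s"]) > tolerance_s:
            findings.append(("duration_mismatch", key))
    return findings

orig = [{"caller": "A", "callee": "B", "start_time": "t1", "duration_s": 300}]
term = [{"caller": "A", "callee": "B", "start_time": "t1", "duration_s": 120}]
print(reconcile(orig, term))  # [('duration_mismatch', ('A', 'B', 't1'))]
```

A record present on one side only, or billed for a shorter duration than it was carried, is exactly the manipulation pattern this comparison surfaces.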

Deployment timelines vary based on the number of operators, the diversity of data sources, and the degree of operator cooperation:

Phase 1 — Initial Deployment (4 to 8 weeks)
Collection of CDRs and primary data sources from the first operator, normalization into the abstract data model, validation of data completeness, and delivery of the first operational dashboard. This phase is designed to produce immediate value — the first fraud detections and revenue reconciliation results — within weeks, not months.
Phase 2 — Full Operator Rollout (3 to 6 months)
Onboarding of additional operators, integration of supplementary data sources (VAS, mobile money, roaming records), and calibration of fraud detection thresholds based on observed traffic patterns.
Phase 3 — Continuous Optimization (ongoing)
Refinement of detection rules as operator tactics evolve, addition of new analytics modules, and training of regulatory staff on advanced use of the platform.

A well-structured deployment contract should specify fixed timelines and fixed costs — the regulatory authority should not bear the risk of open-ended implementation delays.

Staffing is a practical concern for many regulators: how much in-house technical expertise does the authority need? The answer depends on the deployment model:

Managed Service Model
The platform vendor operates the data collection, processing, and quality assurance infrastructure, and delivers pre-processed dashboards and alerts to the regulatory authority. The regulator's team needs to understand how to interpret outputs and investigate alerts — no data engineering expertise required.
Hybrid Model
The vendor manages the technical infrastructure; the regulatory authority's team is trained to run analyses, generate reports, and configure alert thresholds independently. This builds internal capacity over time while maintaining vendor support for complex issues.
Full Ownership Model
The regulatory authority takes full ownership of the platform after initial deployment and training. This maximizes independence but requires investment in building an internal technical team.

For most regulators, the managed or hybrid model is the right starting point. The priority is generating reliable data and actionable insights quickly — capacity building can happen in parallel.

Best Practices

Almost every vendor in this space claims real-time capabilities. It is worth asking: real-time for whom, and for what purpose?

Real-time processing can be critical for operators themselves — to detect a cell outage, a network failure, or an interconnection drop the moment it happens. A regulator's mandate is fundamentally different: the regulator's job is to verify that operators comply with their licence obligations on the basis of certified figures. Any number a regulator puts forward must be defensible to at least 99.9% accuracy. Whether that verification happens at D+1, D+10, or at month-end changes very little in practice.

Beyond timing, several reasons make rushing to real-time output not only unnecessary but counterproductive:

Configuration cannot always be automated
Some analyses require configuration steps that can only be partly automated. Is this new bundle data-only or voice and SMS? Is this new trunk used for interconnection? These questions may require exchanges with operators before a configuration can be validated.
Data quality takes precedence over speed
Sources must be cross-referenced — and they may arrive at different speeds. Duplicates must be handled. Operators who experienced an operational issue must be followed up with. None of this is compatible with raw, unverified real-time reporting.
Guaranteed completeness over flashy latency
A platform that processes data continuously but cannot guarantee completeness is worse than one that delivers verified daily batches. A missing day of CDRs — silently dropped rather than flagged — can produce a revenue undercount that goes undetected for months. The right standard is not "real-time" but "complete, verified, and available at D+1."
Real-time as a distraction
Vendors who lead with real-time capabilities are often solving the operator's problem, not the regulator's. When evaluating a platform, ask not how fast it processes data, but how it guarantees that all data has been received, validated, and correctly integrated — and what happens when it has not.

Test Call Generation (TCG) involves placing calls from local or international networks and verifying they are correctly routed to local subscribers. It is a useful diagnostic tool, but has significant limitations as a primary fraud detection mechanism:

TCG covers only one type of fraud
Test calls can only detect SIM box bypass fraud. They are blind to every other form of abuse circulating on the network — subscription fraud, money laundering, tampered or re-flashed handsets, international revenue share fraud, wangiri, and more.
TCG covers only a fraction of the routes
To test every possible route, a TCG platform would need to be interconnected with every operator on the planet — landlines, mobile networks, calling card providers, and VoIP services. In practice, only a small subset of routes is ever covered, leaving the majority of bypass traffic untested and invisible.
TCG can be detected by fraudsters
Sophisticated fraudsters can identify test calls and route them legitimately while continuing to bypass ordinary traffic. The regulator sees clean results; the fraud continues undisturbed.

CDR Analysis addresses all three limitations. Continuous analysis of Call Detail Records examines every call, on every route, from every subscriber — with no sampling and no blind spots. Behavioural pattern detection identifies SIM box operators reliably, even when they attempt to mimic human traffic. And because CDRs carry the full signature of every transaction, the same data simultaneously exposes other fraud families that TCG cannot reach: subscription fraud, money laundering, IMEI tampering, premium-rate abuse, and interconnect manipulation. One data source, one analytical engine, every fraud type.
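The behavioural detection of SIM boxes can be illustrated with a toy scoring rule: a SIM that places very many calls, never receives any, and almost never calls the same number twice behaves unlike a human subscriber. The thresholds and field names below are assumptions for illustration; production rules are calibrated per market.

```python
def simbox_score(cdrs, sim):
    """Count heuristic behavioural indicators of a SIM box
    (illustrative thresholds, not a platform's actual logic)."""
    out = [c for c in cdrs if c["caller"] == sim]
    inc = [c for c in cdrs if c["callee"] == sim]
    if not out:
        return 0
    distinct_callees = len({c["callee"] for c in out})
    indicators = 0
    indicators += len(out) > 100                       # very high outbound volume
    indicators += len(inc) == 0                        # never receives calls
    indicators += distinct_callees / len(out) > 0.95   # almost no repeat callees
    return indicators

# A SIM that makes 150 outbound calls to 150 distinct numbers,
# receiving none, trips all three indicators.
cdrs = [{"caller": "SIMX", "callee": f"N{i}", "duration_s": 30}
        for i in range(150)]
print(simbox_score(cdrs, "SIMX"))  # 3
```

Real detection combines many more signals (cell immobility, inter-call timing, IMEI sharing), but the principle is the same: mimicking human traffic across every dimension at once is hard.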

Two approaches are commonly used to verify that operators apply the tariffs they have declared: Test Call Generation and CDR analysis. They differ radically in coverage, depth, and evidentiary value.

Test Call Generation: A Narrow and Predictable Probe

Test calls can verify a tariff, but only within a tightly bounded perimeter:

  • Limited sampling of SIMs and offers. Promotions often apply only at certain hours, in certain regions, on certain customer segments, or for certain handset types. A TCG platform tests a handful of SIMs — it cannot replicate the full diversity of the subscriber base.
  • Limited destination coverage. Testing international tariffs requires someone to answer the call on the other end. In practice, only a small subset of destinations is ever tested.
  • Limited roaming coverage. Verifying roaming tariffs requires physical SIMs in every foreign country of interest — operationally impractical beyond a handful of partners.
  • Blind to what wasn't anticipated. TCG only tests the scenarios the regulator thought to script. Any tariff rule outside that script — a new promotion, a regional offer, a time-of-day variation — escapes verification.
CDR Analysis: Systematic, Exhaustive, Reproducible

Tariff verification against CDRs examines what actually happened, across every subscriber and every transaction:

  • Full population coverage. Every subscriber, every time-of-day window, every region, every tariff plan, every destination — systematically, with no sampling.
  • Every transaction type. Top-ups, bundle purchases, voice, SMS, data consumption, and every Value Added Service — all verified against the operator's published tariffs and declared rules.
  • Drill-down to the individual subscriber. When a discrepancy is detected in aggregate, the regulator can descend to the individual CDR to understand exactly why the billed amount diverges from the expected amount — tariff misapplication, promotion not honoured, rounding error, or deliberate over-billing.
  • Retrospective verification. Because CDRs are retained over years, any past period can be re-analysed against the tariffs that were in force at that time — essential for handling historical disputes or auditing past promotions.
  • Subscriber-level complaint handling. When a customer files a complaint, the regulator can reconstruct that subscriber's own consumption history from the raw records and adjudicate the dispute on factual ground — not on the operator's word.

TCG tells you whether one scripted call was priced correctly. CDR analysis tells you whether every call, every SMS, every megabyte, and every mobile money transaction of every subscriber was priced correctly — and lets you prove it, years later, down to the individual record.
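Tariff verification by re-rating can be sketched as recomputing each call's expected charge from its raw CDR fields under the declared tariff, then flagging records billed above it. The tariff structure here (per-minute rate plus setup fee) is a simplified assumption; real plans add time bands, zones, and promotions.

```python
def expected_charge(call, tariff):
    """Re-rate one call under the declared tariff.
    Billing is assumed to round up to whole minutes (illustrative)."""
    minutes = -(-call["duration_s"] // 60)  # ceiling division
    return tariff["setup"] + minutes * tariff["per_minute"]

def find_overbilling(cdrs, tariff, tolerance=0.01):
    """Return every CDR billed above its re-rated expected charge."""
    return [c for c in cdrs
            if c["billed"] - expected_charge(c, tariff) > tolerance]

tariff = {"setup": 0.0, "per_minute": 0.5}
cdrs = [{"duration_s": 61, "billed": 1.5},   # expected 1.0 -> over-billed
        {"duration_s": 120, "billed": 1.0}]  # expected 1.0 -> correct
print(len(find_overbilling(cdrs, tariff)))  # 1
```

Because the check runs over every record rather than a scripted sample, the same logic also resolves individual subscriber complaints: re-rate that subscriber's own CDRs and compare against what was billed.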

Not necessarily. A centralized international gateway has structural weaknesses:

It does not eliminate bypass risk
Operators can still establish their own VoIP interconnections independently, which means a centralized gateway does not inherently solve the problem it is meant to address.
It exposes the regulator to operational liability
Any technical failure or service disruption — however minor — would fall under the regulator's direct responsibility, creating a risk profile that is difficult to manage for a supervisory authority.
It may create a conflict of interest
If the company selected to operate the gateway has direct ties to a local operator or an international carrier active in the same market, the regulator's independence and neutrality could be called into question.
A CDR-based approach offers structurally better coverage
Unlike a gateway, which only sees traffic that transits through it, a CDR-based solution sees every call, from every operator, across every route — domestic, international, and interconnect. Systematic consistency controls across operators, switches, and billing systems detect any missing, late, or manipulated record, making bypass and under-declaration visible rather than invisible. And because CDRs are retained by operators under legal obligation, the regulator can reopen the past: re-run any historical period, rebuild any KPI, and reconstruct evidence years after the fact — something a gateway, which only captures traffic in real time, can never do.

Still have questions?

Talk to our team directly.

Contact Us