SYSTEM_LOG // 2026-02-15

The Naked Model Problem: How I Extracted a $50k AI Model in 30 Seconds

Most Android AI apps ship their TFLite models in plaintext. Learn how APK extraction works and why ProGuard won't save your intellectual property.

The Naked Model Problem

Founders and engineers are spending months (and thousands of euros) training proprietary AI models for mobile apps. They optimize weights, prune architectures, and curate massive datasets.

Then, they ship the model "naked."

Last week, I performed a security audit on several top-tier Android apps in the nature and medical niches. What I found was staggering: over 90% of them are effectively gifting their core IP to the public.

The 30-Second Heist

If you think your model is safe because your app is compiled, you are mistaken. An APK is just a ZIP file. Here is exactly how I "stole" a proprietary TFLite model:

  1. Download the APK: Pull it straight off a device with adb, or grab it from any standard APK mirror.
  2. Handle the Splits: Modern apps ship as split APKs. Bundled assets usually live in base.apk, while ABI-specific splits like config.arm64_v8a.apk carry the native libraries.
  3. Rename & Unzip: Change .apk to .zip and hit extract.
  4. The Prize: Navigate to /assets/. There it was: classifier.tflite.
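The steps above can be sketched in a few lines of Python: an APK is a ZIP, so the standard zipfile module reads it directly, no renaming required. The snippet below builds a stand-in APK in memory (the entry names and bytes are illustrative, not taken from any real app) and pulls out every bundled TFLite model. Note that a genuine .tflite file is a FlatBuffer whose file identifier "TFL3" sits at bytes 4-7, which is a quick plaintext check.

```python
import io
import zipfile

TFLITE_MAGIC = b"TFL3"  # FlatBuffer file identifier at bytes 4-7 of a .tflite file

def extract_models(apk_bytes: bytes) -> dict[str, bytes]:
    """Treat the APK as a plain ZIP and pull every bundled TFLite model."""
    models = {}
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        for name in apk.namelist():
            if name.startswith("assets/") and name.endswith(".tflite"):
                models[name] = apk.read(name)
    return models

# Stand-in APK: a ZIP with a fake model under /assets/ (hypothetical contents).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as fake_apk:
    fake_apk.writestr("assets/classifier.tflite",
                      b"\x1c\x00\x00\x00TFL3" + b"\x00" * 64)
    fake_apk.writestr("classes.dex", b"dex\n035")

models = extract_models(buf.getvalue())
print(sorted(models))                                    # ['assets/classifier.tflite']
print(models["assets/classifier.tflite"][4:8] == TFLITE_MAGIC)  # True
```

Against a real APK you would simply pass the downloaded file's bytes; nothing about the format resists this.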

I opened it in Netron, and within seconds, I had the full neural network architecture, input tensors, and every single trained weight.

Why Your Current Defense is Failing

1. ProGuard and R8

ProGuard and R8 are great at obfuscating your Java/Kotlin code. They are useless for your assets: neither tool touches the /assets or /res/raw folders, so your model ships in plaintext regardless of how aggressively you shrink and obfuscate the code around it.

2. Google ML Kit

Many professional apps rely on Google's ML Kit. While convenient, the default implementation often stores downloaded models in a predictable path in the device's internal storage. On a rooted device, these are sitting ducks.

3. Native Library Extraction

In many manifests, android:extractNativeLibs is set to true, so the installer copies your .so files to disk uncompressed. Either way, the native libraries sit inside the APK itself, and it is trivial for a researcher to scan them for the exported JNI symbols that interact with your model.
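To see how little effort this takes: JNI exports follow the predictable Java_package_Class_method naming scheme, so a plain byte scan over the bundled .so files surfaces them, much like running strings would. The sketch below uses another in-memory stand-in APK; the library and symbol names are hypothetical.

```python
import io
import re
import zipfile

def jni_symbols(apk_bytes: bytes) -> list[str]:
    """Scan every bundled .so for the Java_* naming pattern that JNI exports use."""
    found = []
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        for name in apk.namelist():
            if name.startswith("lib/") and name.endswith(".so"):
                blob = apk.read(name)
                found += [m.decode() for m in re.findall(rb"Java_[A-Za-z0-9_]+", blob)]
    return found

# Stand-in APK with a fake native library (symbol name is invented for illustration).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as fake_apk:
    fake_apk.writestr(
        "lib/arm64-v8a/libinference.so",
        b"\x7fELF...Java_com_example_ModelBridge_runInference\x00",
    )

print(jni_symbols(buf.getvalue()))
# ['Java_com_example_ModelBridge_runInference']
```

One symbol name like runInference is often enough to tell a researcher exactly where to attach a hook.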

How TensorSeal Fixes This

We built TensorSeal to move the goalposts. Instead of leaving models in the assets folder, we:

  • Encrypt the model weights using AES-128-GCM.
  • Move the decryption logic into a custom C++ JNI layer.
  • Ensure the model exists in plaintext only in RAM, and only during inference.

If a hacker unzips your APK, they find a .lock file that is cryptographically useless to them.
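TensorSeal's actual decryption layer lives in C++ behind JNI, but the shape of the scheme can be sketched in a few lines of Python using the third-party cryptography package. All function and variable names here are illustrative, not TensorSeal's API; the point is the two halves: a build-time seal step that produces the .lock asset, and a runtime unseal step that only ever yields plaintext in memory. GCM also authenticates the blob, so a tampered .lock fails to decrypt at all.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def seal_model(plain_model: bytes, key: bytes) -> bytes:
    """Build-time step: encrypt the weights; ship the result as the .lock asset."""
    nonce = os.urandom(12)                      # 96-bit nonce, standard for GCM
    return nonce + AESGCM(key).encrypt(nonce, plain_model, None)

def unseal_model(locked: bytes, key: bytes) -> bytes:
    """Runtime step: decrypt into memory only; raises if the blob was tampered with."""
    nonce, ciphertext = locked[:12], locked[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=128)       # AES-128 key
weights = b"TFL3-stand-in-model-bytes"          # placeholder for classifier.tflite
locked = seal_model(weights, key)

assert weights not in locked                    # ciphertext reveals no plaintext
assert unseal_model(locked, key) == weights     # round-trips only with the key
```

Key handling is the hard part, which is why the real decryption path belongs in the native layer: a key hard-coded in Kotlin just moves the problem one unzip away.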


Don't let your startup be an involuntary open-source project. Initialize your defense at TensorSeal.com

END OF TRANSMISSION // TENSORSEAL_SEC_OPS