January 31, 2026

RunAnywhere React Native SDK Part 1: Chat with LLMs On-Device

Run LLMs Entirely On-Device with React Native


This is Part 1 of our RunAnywhere React Native SDK tutorial series:

  1. Chat with LLMs (this post) — Project setup and streaming text generation
  2. Speech-to-Text — Real-time transcription with Whisper
  3. Text-to-Speech — Natural voice synthesis with Piper
  4. Voice Pipeline — Full voice assistant with VAD

React Native lets you build cross-platform apps with JavaScript and TypeScript. Now, with RunAnywhere, you can add powerful on-device AI capabilities—LLM chat, speech recognition, voice synthesis—all running locally with no cloud dependency.

In this tutorial, we'll set up the SDK and build a streaming chat interface that works offline on both iOS and Android.

Why On-Device AI?

| Aspect  | Cloud AI             | On-Device AI             |
| ------- | -------------------- | ------------------------ |
| Privacy | Data sent to servers | Data stays on device     |
| Latency | Network round-trip   | Instant local processing |
| Offline | Requires internet    | Works anywhere           |
| Cost    | Per-request billing  | One-time download        |

For apps handling sensitive data, on-device processing provides the privacy users expect.

Prerequisites

  • Node.js 18+
  • React Native CLI or Expo (bare workflow)
  • Xcode 15+ (for iOS builds)
  • Android Studio with SDK 24+, NDK, and CMake (for Android builds)
  • Physical ARM64 device required for Android (emulators won't work—see Android Setup)
  • ~250MB storage for the LLM model

Project Setup

1. Create a New React Native Project

```bash
npx react-native init LocalAIPlayground --template react-native-template-typescript
cd LocalAIPlayground
```

2. Install the RunAnywhere SDK

```bash
npm install @runanywhere/core@0.17.4 @runanywhere/llamacpp@0.17.4 @runanywhere/onnx@0.17.4
```

3. iOS Configuration

Update your ios/Podfile:

```ruby
platform :ios, '15.1'

# Add to the bottom of the file
post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '15.1'
    end
  end
end
```

Install pods:

```bash
cd ios && pod install && cd ..
```

Add microphone permission to ios/LocalAIPlayground/Info.plist:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access for voice AI features.</string>
```

4. Android Configuration

Update android/app/build.gradle:

```groovy
android {
    defaultConfig {
        minSdkVersion 24 // Android 7.0+
    }
}
```

Add permissions to android/app/src/main/AndroidManifest.xml:

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```

Android Setup (Detailed)

Physical Device Required

Important: The RunAnywhere SDK includes native libraries compiled only for ARM64 (arm64-v8a). Android emulators (x86/x86_64) will NOT work.

If you see this error, you're likely running on an emulator:

```text
dlopen failed: library "librunanywherecore.so" not found
```

Set JAVA_HOME

Use Android Studio's bundled JDK (JBR):

macOS:

```bash
export JAVA_HOME="/Applications/Android Studio.app/Contents/jbr/Contents/Home"
```

Windows (PowerShell):

```powershell
$env:JAVA_HOME = "C:\Program Files\Android\Android Studio\jbr"
```

Windows (CMD):

```cmd
set JAVA_HOME=C:\Program Files\Android\Android Studio\jbr
```

Configure gradle.properties

Copy the example file and configure:

```bash
cp android/gradle.properties.example android/gradle.properties
```

Ensure these settings in android/gradle.properties:

```properties
hermesEnabled=true
runanywhere.testLocal=false
runanywhere.rebuildCommons=false
```

Note: testLocal=false uses pre-built native libraries from the SDK. Only set to true if you have the SDK source locally.

Running on Physical Device

  1. Enable Developer Options: Settings → About Phone → Tap "Build Number" 7 times
  2. Enable USB Debugging: Settings → Developer Options → USB Debugging
  3. Connect device: adb devices (should show your device)
  4. Port forwarding: adb reverse tcp:8081 tcp:8081
  5. Start Metro: npm start
  6. Run app: npx react-native run-android

Troubleshooting

| Issue | Solution |
| --- | --- |
| "hermesEnabled" property not found | Copy `android/gradle.properties.example` to `android/gradle.properties` |
| "Unable to load script" on device | Run `adb reverse tcp:8081 tcp:8081` |
| Grey screen after launch | Restart the Metro bundler: `npm start` |
| Native library not found | Ensure you're on a physical ARM64 device, not an emulator |

SDK Initialization

The SDK requires a specific initialization order. Update your App.tsx:

```typescript
import React, { useEffect, useState } from 'react';
import { SafeAreaView, Text, ActivityIndicator, StyleSheet } from 'react-native';
import { RunAnywhere, SDKEnvironment } from '@runanywhere/core';
import { LlamaCPP } from '@runanywhere/llamacpp';
import { ONNX } from '@runanywhere/onnx';
import { ChatScreen } from './src/screens/ChatScreen';

export default function App() {
  const [isInitialized, setIsInitialized] = useState(false);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    initializeSDK();
  }, []);

  async function initializeSDK() {
    try {
      // Step 1: Initialize core SDK
      await RunAnywhere.initialize({
        environment: SDKEnvironment.Development,
      });
      console.log('SDK: RunAnywhere initialized');

      // Step 2: Register backends BEFORE adding models
      LlamaCPP.register();
      console.log('SDK: LlamaCPP backend registered');

      ONNX.register();
      console.log('SDK: ONNX backend registered');

      // Step 3: Register the LLM model
      RunAnywhere.registerModel({
        id: 'lfm2-350m-q4_k_m',
        name: 'LiquidAI LFM2 350M',
        url: 'https://huggingface.co/LiquidAI/LFM2-350M-GGUF/resolve/main/LFM2-350M-Q4_K_M.gguf',
        framework: 'llamacpp',
        memoryRequirement: 250_000_000,
      });
      console.log('SDK: LLM model registered');

      setIsInitialized(true);
    } catch (e) {
      console.error('SDK initialization failed:', e);
      setError(e instanceof Error ? e.message : 'Unknown error');
    }
  }

  if (error) {
    return (
      <SafeAreaView style={styles.container}>
        <Text style={styles.errorText}>Error: {error}</Text>
      </SafeAreaView>
    );
  }

  if (!isInitialized) {
    return (
      <SafeAreaView style={styles.container}>
        <ActivityIndicator size="large" color="#007AFF" />
        <Text style={styles.loadingText}>Initializing AI...</Text>
      </SafeAreaView>
    );
  }

  return <ChatScreen />;
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: '#000',
  },
  loadingText: {
    marginTop: 16,
    color: '#fff',
    fontSize: 16,
  },
  errorText: {
    color: '#ff4444',
    fontSize: 16,
    padding: 20,
    textAlign: 'center',
  },
});
```

Note: In development mode, no API key is needed—all inference runs on-device. For production with RunAnywhere Cloud routing, provide your API key in the initialize() call.
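For production, the call might look like the sketch below. The `apiKey` option name and `SDKEnvironment.Production` value are extrapolations from the development setup above, not confirmed API; check the SDK reference for the actual option names:

```typescript
// Hypothetical production configuration (option names are assumptions):
await RunAnywhere.initialize({
  environment: SDKEnvironment.Production, // assumed counterpart of .Development
  apiKey: 'YOUR_API_KEY',                 // enables RunAnywhere Cloud routing
});
```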


Architecture Overview

```text
┌─────────────────────────────────────────────────────┐
│                  RunAnywhere Core                   │
│           (Unified API, Model Management)           │
├───────────────────────┬─────────────────────────────┤
│   LlamaCPP Backend    │        ONNX Backend         │
│   ─────────────────   │      ─────────────────      │
│   • Text Generation   │      • Speech-to-Text       │
│   • Chat Completion   │      • Text-to-Speech       │
│   • Streaming         │      • Voice Activity (VAD) │
└───────────────────────┴─────────────────────────────┘
```

Downloading & Loading Models

Create src/hooks/useModelLoader.ts:

```typescript
import { useState, useCallback } from 'react';
import { RunAnywhere } from '@runanywhere/core';

export function useModelLoader() {
  const [downloadProgress, setDownloadProgress] = useState(0);
  const [isDownloading, setIsDownloading] = useState(false);
  const [isLoaded, setIsLoaded] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const downloadAndLoad = useCallback(async (modelId: string) => {
    setIsDownloading(true);
    setError(null);

    try {
      // Check if already downloaded
      const isDownloaded = await RunAnywhere.isModelDownloaded(modelId);

      if (!isDownloaded) {
        // Download with progress tracking
        await RunAnywhere.downloadModel(modelId, (progress) => {
          setDownloadProgress(progress.progress);
          console.log(`Download: ${(progress.progress * 100).toFixed(1)}%`);
        });
      }

      // Load into memory
      await RunAnywhere.loadModel(modelId);
      setIsLoaded(true);
      console.log('Model loaded successfully');
    } catch (e) {
      setError(e instanceof Error ? e.message : 'Unknown error');
      console.error('Model error:', e);
    } finally {
      setIsDownloading(false);
    }
  }, []);

  return {
    downloadProgress,
    isDownloading,
    isLoaded,
    error,
    downloadAndLoad,
  };
}
```

Note: Only one LLM model can be loaded at a time. Loading a different model automatically unloads the current one.
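Because loading a new model implicitly unloads the current one, a small guard that skips redundant loads avoids needless reload time when the same model is requested twice. This is a minimal sketch, not part of the SDK; `loadModel` is injected so any loader (e.g. `RunAnywhere.loadModel`) can be passed in:

```typescript
// Sketch: remember the active model id and skip loads that would be no-ops.
// The injected `loadModel` stands in for RunAnywhere.loadModel.
function createModelSwitcher(loadModel: (id: string) => Promise<void>) {
  let currentId: string | null = null;
  return async function switchTo(id: string): Promise<boolean> {
    if (id === currentId) return false; // already loaded; the SDK call is skipped
    await loadModel(id);                // the SDK unloads the previous model itself
    currentId = id;
    return true;                        // a real (re)load happened
  };
}
```

With `RunAnywhere.loadModel` injected, `switchTo('lfm2-350m-q4_k_m')` loads on the first call and returns `false` on repeats.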

Streaming Text Generation

Create src/screens/ChatScreen.tsx:

```typescript
import React, { useState, useEffect, useRef } from 'react';
import {
  View,
  Text,
  TextInput,
  TouchableOpacity,
  FlatList,
  StyleSheet,
  KeyboardAvoidingView,
  Platform,
} from 'react-native';
import { RunAnywhere } from '@runanywhere/core';
import { useModelLoader } from '../hooks/useModelLoader';

interface Message {
  id: string;
  role: 'user' | 'assistant';
  content: string;
}

export function ChatScreen() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [inputText, setInputText] = useState('');
  const [isGenerating, setIsGenerating] = useState(false);
  const flatListRef = useRef<FlatList>(null);

  const { isLoaded, isDownloading, downloadProgress, downloadAndLoad } = useModelLoader();

  useEffect(() => {
    downloadAndLoad('lfm2-350m-q4_k_m');
  }, [downloadAndLoad]);

  async function sendMessage() {
    const text = inputText.trim();
    if (!text || isGenerating || !isLoaded) return;

    setInputText('');

    // Add user message
    const userMessage: Message = {
      id: Date.now().toString(),
      role: 'user',
      content: text,
    };

    // Add placeholder for assistant
    const assistantMessage: Message = {
      id: (Date.now() + 1).toString(),
      role: 'assistant',
      content: '',
    };

    setMessages(prev => [...prev, userMessage, assistantMessage]);
    setIsGenerating(true);

    try {
      const streamResult = await RunAnywhere.generateStream(text, {
        maxTokens: 256,
        temperature: 0.7,
      });

      let fullResponse = '';
      for await (const token of streamResult.stream) {
        fullResponse += token;
        setMessages(prev => {
          const updated = [...prev];
          updated[updated.length - 1] = {
            ...updated[updated.length - 1],
            content: fullResponse,
          };
          return updated;
        });
      }

      // Get metrics
      const result = await streamResult.result;
      console.log(`Speed: ${result.tokensPerSecond.toFixed(1)} tok/s`);
    } catch (e) {
      console.error('Generation error:', e);
      setMessages(prev => {
        const updated = [...prev];
        updated[updated.length - 1] = {
          ...updated[updated.length - 1],
          content: `Error: ${e instanceof Error ? e.message : 'Unknown error'}`,
        };
        return updated;
      });
    } finally {
      setIsGenerating(false);
    }
  }

  function renderMessage({ item }: { item: Message }) {
    const isUser = item.role === 'user';
    return (
      <View style={[styles.messageBubble, isUser ? styles.userBubble : styles.assistantBubble]}>
        <Text style={styles.messageText}>{item.content || '...'}</Text>
      </View>
    );
  }

  if (isDownloading) {
    return (
      <View style={styles.loadingContainer}>
        <Text style={styles.loadingText}>
          Downloading model... {(downloadProgress * 100).toFixed(0)}%
        </Text>
        <View style={styles.progressBar}>
          <View style={[styles.progressFill, { width: `${downloadProgress * 100}%` }]} />
        </View>
      </View>
    );
  }

  return (
    <KeyboardAvoidingView
      style={styles.container}
      behavior={Platform.OS === 'ios' ? 'padding' : undefined}
    >
      <FlatList
        ref={flatListRef}
        data={messages}
        renderItem={renderMessage}
        keyExtractor={item => item.id}
        contentContainerStyle={styles.messageList}
        onContentSizeChange={() => flatListRef.current?.scrollToEnd()}
      />

      <View style={styles.inputContainer}>
        <TextInput
          style={styles.input}
          value={inputText}
          onChangeText={setInputText}
          placeholder="Type a message..."
          placeholderTextColor="#666"
          editable={isLoaded && !isGenerating}
          onSubmitEditing={sendMessage}
        />
        <TouchableOpacity
          style={[styles.sendButton, (!isLoaded || isGenerating) && styles.disabled]}
          onPress={sendMessage}
          disabled={!isLoaded || isGenerating}
        >
          <Text style={styles.sendButtonText}>{isGenerating ? '...' : 'Send'}</Text>
        </TouchableOpacity>
      </View>
    </KeyboardAvoidingView>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#000',
  },
  loadingContainer: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: '#000',
    padding: 40,
  },
  loadingText: {
    color: '#fff',
    fontSize: 16,
    marginBottom: 16,
  },
  progressBar: {
    width: '100%',
    height: 8,
    backgroundColor: '#333',
    borderRadius: 4,
    overflow: 'hidden',
  },
  progressFill: {
    height: '100%',
    backgroundColor: '#007AFF',
  },
  messageList: {
    padding: 16,
    paddingBottom: 100,
  },
  messageBubble: {
    maxWidth: '80%',
    padding: 12,
    borderRadius: 16,
    marginVertical: 4,
  },
  userBubble: {
    backgroundColor: '#007AFF',
    alignSelf: 'flex-end',
  },
  assistantBubble: {
    backgroundColor: '#333',
    alignSelf: 'flex-start',
  },
  messageText: {
    color: '#fff',
    fontSize: 16,
  },
  inputContainer: {
    flexDirection: 'row',
    padding: 16,
    backgroundColor: '#111',
    borderTopWidth: 1,
    borderTopColor: '#333',
  },
  input: {
    flex: 1,
    backgroundColor: '#222',
    borderRadius: 20,
    paddingHorizontal: 16,
    paddingVertical: 10,
    color: '#fff',
    fontSize: 16,
  },
  sendButton: {
    marginLeft: 12,
    backgroundColor: '#007AFF',
    borderRadius: 20,
    paddingHorizontal: 20,
    justifyContent: 'center',
  },
  sendButtonText: {
    color: '#fff',
    fontSize: 16,
    fontWeight: '600',
  },
  disabled: {
    opacity: 0.5,
  },
});
```
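The streaming loop in `sendMessage` relies on two things: accumulating tokens into a growing string, and replacing the last message immutably on each tick so React re-renders. Stripped of the UI, the pattern looks like this; the mock async generator below stands in for `streamResult.stream` and is not part of the SDK:

```typescript
interface Message {
  id: string;
  role: 'user' | 'assistant';
  content: string;
}

// Mock token stream standing in for RunAnywhere's streamResult.stream.
async function* mockStream(tokens: string[]) {
  for (const t of tokens) yield t;
}

// Pure version of the setMessages updater: replace the last message's
// content without mutating the previous array (what React needs).
function withUpdatedLast(messages: Message[], content: string): Message[] {
  const updated = [...messages];
  updated[updated.length - 1] = { ...updated[updated.length - 1], content };
  return updated;
}

async function consume(messages: Message[], stream: AsyncIterable<string>) {
  let full = '';
  for await (const token of stream) {
    full += token;
    messages = withUpdatedLast(messages, full); // setMessages(prev => ...) in the app
  }
  return messages;
}
```

Running `consume` on a placeholder assistant message with `mockStream(['Hel', 'lo'])` ends with the last message's content set to `'Hello'`, while the original array is left untouched.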

Models Reference

| Model ID | Size | Notes |
| --- | --- | --- |
| `lfm2-350m-q4_k_m` | ~250MB | LiquidAI LFM2, fast, efficient |

What's Next

In Part 2, we'll add speech-to-text capabilities using Whisper, including native audio recording for both platforms.




Questions? Open an issue on GitHub or reach out on Twitter/X.


Copyright © 2025 RunAnywhere, Inc.