iOS

Register an account for free if you don’t have one.

Identify Music or TV with iOS SDK

This demo shows how to identify music (songs) or detect live TV channels from recorded sound with the ACRCloud iOS SDK. Contact us if you have any questions or special requirements about the SDK: support@acrcloud.com

Preparation

  • The newest ACRCloud iOS SDK, which contains both Objective-C and Swift demo projects.
  • If you want to recognize music, you need an Audio Recognition project. ( See How to Recognize Music )
  • If you want to detect TV channels, you need a Live Channel Detection project. ( See How to Detect Live TV Channels )
  • Save the “host”, “access_key” and “access_secret” of your project.
  • Make sure you have Xcode installed.

Quick Trial

If you are already familiar with iOS development:

  • Download the ACRCloud iOS SDK package and unzip it.
  • Open either ACRCloudDemo or ACRCloudDemo_Swift.
  • Update accessKey, host and accessSecret in ViewController with the information of your project.
  • Run the demo project to test recognizing contents in the buckets of your project.

Step-by-Step Tutorial

Step 1

Download the ACRCloud iOS SDK package and unzip it.

 

Step 2

Open Xcode and create a new Single View iOS Application project. Click “Next” and choose the directory in which to place the project.


Step 3

Copy libACRCloud_IOS_SDK.a and the two header files ACRCloudConfig.h and ACRCloudRecognition.h to the directory of your project.


Add the three files above to your project using the “Add Files to…” command in the “File” menu.


Step 4

Open the configuration page of your project by selecting it, then click the “+” button in the “Linked Frameworks and Libraries” section to search for and add the following system frameworks and libraries:

  • Security.framework
  • libc++.tbd
  • AVFoundation.framework
  • AudioToolbox.framework


Step 5

Replace the default empty Main.storyboard, ViewController.h and ViewController.m with the corresponding files from the “ACRCloudDemo” project we provide to get a quick overview of how our SDK works.

Step 6

Update accessKey, host and accessSecret in ViewController with the information of your project.

Then you can run the demo project to test recognizing contents in the buckets of your project.
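As a sketch, the Swift side of this step might look like the following (the property names follow the ACRCloudConfig header shipped with the SDK; the exact Swift bridging is an assumption, so check it against the ACRCloudDemo_Swift project):

```swift
// Credentials from your project's console page.
let config = ACRCloudConfig()
config.accessKey = "your_access_key"
config.accessSecret = "your_access_secret"
config.host = "your_project_host"

// Create the recognizer with this configuration.
let client = ACRCloudRecognition(config: config)
```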


Recognition Mode

config.recMode depends on the type of your project:

rec_mode_remote is for Audio & Video Recognition, Live Channel Detection and Hybrid Recognition; it performs online recognition.

rec_mode_local is for Offline Recognition; put the offline database (such as “acrcloud_local_db”) into your app project’s workspace.

rec_mode_both supports both online and offline recognition. It searches the local database first, then the cloud database.

rec_mode_advance_remote is almost the same as rec_mode_remote, except that you can also get the fingerprint data when the network is unavailable. To use it, set resultFpBlock on the ACRCloudConfig.
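With those definitions, choosing a mode is a single assignment on the config; a minimal Swift sketch (how these C constants bridge into Swift is an assumption):

```swift
// Online recognition against your project's cloud buckets.
config.recMode = rec_mode_remote

// Offline recognition: bundle the offline database ("acrcloud_local_db")
// with the app and use rec_mode_local instead.
// config.recMode = rec_mode_local

// Search the bundled local database first, then fall back to the cloud.
// config.recMode = rec_mode_both
```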

One-time Recognition and Loop Recognition

One-time Recognition:

Click the “Start” button and the demo app will begin to record and recognize. When it detects a result, the demo app will stop and display the result. While it is recording and detecting, you can stop the recognition at any time by clicking the “Stop” button.

Loop Recognition:

(Screenshot: the two lines of code in the demo that stop recognition once a result is returned.)

If you remove those two lines, the demo app will not stop recording and detecting until you click the “Stop” button. Each time the app detects a result, or one recognition loop finishes, you get the result from the “handleResult” block, and the app then continues detecting.
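The one-time vs. loop pattern might be sketched in Swift as follows (the resultBlock, startRecordRec and stopRecordRec names are assumptions modeled on the demo code, and display(_:) is a hypothetical UI helper; check the headers of your SDK version):

```swift
client.resultBlock = { result, resType in
    // These two lines make this a one-time recognition:
    // the session stops as soon as a result arrives.
    client.stopRecordRec()
    display(result)
    // Delete the two lines above to keep recording and
    // recognizing in a loop until the user taps "Stop".
}
client.startRecordRec()
```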

Prerecording Recognition

Enabling prerecording makes recognition noticeably faster, because audio captured before a session starts is already available to the recognizer.

To enable this feature, call -(void)startPreRecord:(NSInteger)recordTime.
The parameter recordTime is the prerecording time. The recommended value is 3000-4000.
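In Swift the call might look like this (assuming startPreRecord is exposed on the recognizer as startPreRecord(_:); the bridging is an assumption):

```swift
// Buffer about 3.5 seconds of audio before recognition sessions start,
// so each session can begin matching immediately.
client.startPreRecord(3500)
```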

File/PCM/Fingerprint Recognition

If you recognize raw audio data, the audio format should be RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono, 8000 Hz. You can also use the resample function to convert your audio data to the required format.

-(NSString*)recognize:(char*)buffer len:(int)len;
-(NSString*)recognize:(NSData*)pcm_data;
-(NSString*)recognize_fp:(NSData*)fingerprint;
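For instance, recognizing a local file already in the required format might look like the sketch below (the Swift bridging recognize(_:) of -(NSString*)recognize:(NSData*)pcm_data is an assumption):

```swift
// Load a 16-bit mono 8000 Hz PCM WAV file and recognize it.
let url = URL(fileURLWithPath: "sample.wav")
if let pcmData = try? Data(contentsOf: url) {
    let resultJSON = client.recognize(pcmData)  // JSON string with the match
    print(resultJSON)
}
```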

Resample Function

Convert your audio format to RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 8000 Hz

+(NSData*) resample:(char*)pcm
                len:(unsigned)len
         sampleRate:(unsigned)sampleRate
           nChannel:(short)nChannel;

+(NSData*) resample_bit32:(char*)pcm
                      len:(unsigned)bytes
               sampleRate:(unsigned)sampleRate
                 nChannel:(short)nChannel
                  isFloat:(bool)isFloat;
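As an illustration, converting 44.1 kHz stereo 16-bit PCM before recognition could look like the sketch below (which class exposes resample, and its exact Swift bridging, are assumptions; loadSomePCM() is a hypothetical helper that returns raw sample data):

```swift
// rawPCM: interleaved 16-bit samples at 44100 Hz, 2 channels.
var rawPCM: Data = loadSomePCM()
rawPCM.withUnsafeMutableBytes { buf in
    let ptr = buf.baseAddress!.assumingMemoryBound(to: CChar.self)
    // Convert to 16-bit mono 8000 Hz, the format the recognizer expects.
    if let resampled = ACRCloudRecognition.resample(ptr,
                                                    len: UInt32(buf.count),
                                                    sampleRate: 44100,
                                                    nChannel: 2) {
        print(client.recognize(resampled))
    }
}
```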

What’s Next

See the iOS SDK Reference to start integration.