As you probably know, RxJava is good for solving two problems: processing event streams and working with asynchronous methods. In a previous article (Shake Detector for Android with RxJava), I showed you how to build an operator chain to process an event stream from a sensor. In this article, I want to demonstrate how RxJava can be applied to work with an existing asynchronous API. I’ve chosen Camera2 API as an example of this type of API.
I will show an example of using the Camera2 API, which is currently poorly documented and little explored by the community. We'll be using RxJava2 to tame this API. The second version of this popular library came out relatively recently, and not many examples are available.
So, who is this article for? I’m assuming that the reader is a smart, experienced and inquisitive Android developer. Basic knowledge of reactive programming is highly desirable (you’ll find a good introduction from Jake Wharton here) as well as the ability to understand Marble Diagrams. This article will be useful for those of you desiring to get into using a reactive approach, as well as those who want to use Camera2 API in their projects. I warn you in advance that there will be a lot of code!
For the current version, full instructions, and documentation, check here.
This library is useful when using RxJava on Android; it is mainly used for AndroidSchedulers.
Some time ago I was involved in a code review of a feature written with the Camera1 API and was unpleasantly surprised by its unavoidable concurrency issues. It's clear that Google has also recognised the problem and deprecated the first version of the API. As an alternative, they suggest using the Camera2 API, which is available on Android Lollipop and newer.
Google has worked hard on mistakes relating to organising threads. All operations are carried out asynchronously, with notifications coming via callbacks. In particular, you can choose the thread in which the callback method sent to the corresponding Handler will be called. As always, working with subsequent asynchronous calls used in this API can quickly descend into Callback Hell.
To this method we pass the ID of the chosen camera; a callback, so that we can receive the asynchronous result; and a Handler, if we want the callback methods to be called on that Handler's thread.
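For reference, this is how the method is declared in the framework:

```java
// android.hardware.camera2.CameraManager
public void openCamera(@NonNull String cameraId,
        @NonNull CameraDevice.StateCallback callback,
        @Nullable Handler handler)
        throws CameraAccessException
```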
This is where we come across the first asynchronous method. It’s understandable, as the initialisation of the device is a long and costly process.
In the reactive world, these methods will correspond to events. Let's create an Observable which will emit an event when the Camera API calls the onOpened, onClosed and onDisconnected callbacks. To distinguish these events, we'll create an enum:
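The enum is trivial; its name, DeviceStateEvents, is the one used below:

```java
public enum DeviceStateEvents {
    ON_OPENED,
    ON_CLOSED,
    ON_DISCONNECTED
}
```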
For the reactive stream (from here on in, I’ll call the reactive stream the sequence of reactive operators) to be able to do anything with the device, we’ll add a link to CameraDevice in the emitted event. The simplest method is to emit Pair<DeviceStateEvents, CameraDevice>. To create an Observable, we’ll use the create method (remember we’re using RxJava2, so now this isn’t a completely shameful action).
Here is the signature of the create method:
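In RxJava2 it is declared as:

```java
public static <T> Observable<T> create(ObservableOnSubscribe<T> source)
```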
This means that we need to pass the object implementing ObservableOnSubscribe interface. This interface only contains one method:
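Here it is, as declared in RxJava2:

```java
public interface ObservableOnSubscribe<T> {
    void subscribe(ObservableEmitter<T> e) throws Exception;
}
```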
which is called every time Observer subscribes to our Observable. Let’s look at what ObservableEmitter is:
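The interface looks like this in RxJava2:

```java
public interface ObservableEmitter<T> extends Emitter<T> {
    void setDisposable(Disposable d);
    void setCancellable(Cancellable c);
    boolean isDisposed();
    ObservableEmitter<T> serialize();
}
```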
It's already looking useful. Using the setDisposable/setCancellable methods, you can register an action to be carried out when our Observable is disposed of. This is extremely useful if, while creating an Observable, we open a resource that needs to be closed later. We could have created a Disposable that closes the device on unsubscription, but since we want to react to the onClosed event, we won't do this.
The isDisposed method allows us to check whether the subscription to our Observable has already been disposed of.
Take note that the ObservableEmitter interface extends the Emitter interface:
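The Emitter interface is small:

```java
public interface Emitter<T> {
    void onNext(T value);
    void onError(Throwable error);
    void onComplete();
}
```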
These are the methods we need! We’ll call onNext each time Camera API calls the callbacks CameraDevice.StateCallback onOpened / onClosed / onDisconnected; and we’ll call onError when Camera API calls the onError callback.
So, let's apply our knowledge. The method creating the Observable can look something like this (to keep it readable, I've removed the isDisposed() checks; for the full code with all the boring checks, look at GitHub):
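A sketch of what that method might look like. The DeviceStateEvents enum is the one from above; wrapping onError's error code in an IllegalStateException is my own choice here, not necessarily the article's:

```java
public static Observable<Pair<DeviceStateEvents, CameraDevice>> openCamera(
        @NonNull String cameraId,
        @NonNull CameraManager cameraManager) {
    return Observable.create(emitter ->
        cameraManager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(@NonNull CameraDevice cameraDevice) {
                emitter.onNext(Pair.create(DeviceStateEvents.ON_OPENED, cameraDevice));
            }

            @Override
            public void onClosed(@NonNull CameraDevice cameraDevice) {
                emitter.onNext(Pair.create(DeviceStateEvents.ON_CLOSED, cameraDevice));
                emitter.onComplete();
            }

            @Override
            public void onDisconnected(@NonNull CameraDevice cameraDevice) {
                emitter.onNext(Pair.create(DeviceStateEvents.ON_DISCONNECTED, cameraDevice));
                emitter.onComplete();
            }

            @Override
            public void onError(@NonNull CameraDevice cameraDevice, int error) {
                emitter.onError(new IllegalStateException(
                        "CameraDevice.StateCallback.onError: " + error));
            }
        }, null)); // null Handler: callbacks arrive on the caller's thread
}
```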
Great, we’ve just become a little more reactive!
As I've already said, all Camera2 API methods accept a Handler as one of their parameters. If we pass null, the callbacks will be called on the current thread. In our case, that's the thread in which subscribe was called, i.e. the Main Thread.
Now that we have a CameraDevice, we can open CaptureSession. So, without further ado, let’s continue.
For this, we’ll use the method CameraDevice.createCaptureSession.
Here’s the signature:
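The framework declares it as:

```java
// android.hardware.camera2.CameraDevice
public abstract void createCaptureSession(
        @NonNull List<Surface> outputs,
        @NonNull CameraCaptureSession.StateCallback callback,
        @Nullable Handler handler)
        throws CameraAccessException;
```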
Yes, it's rich. However, we already know how to conquer the callbacks! We'll create an Observable which will emit an event when the Camera API calls these methods. To distinguish them, we'll create an enum:
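A sketch of the enum; the values mirror the CameraCaptureSession.StateCallback methods:

```java
public enum CaptureSessionStateEvents {
    ON_CONFIGURED,
    ON_READY,
    ON_ACTIVE,
    ON_CLOSED,
    ON_SURFACE_PREPARED
}
```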
So that CameraCaptureSession is within the reactive stream, we’ll generate not just CaptureSessionStateEvent, but Pair<CaptureSessionStateEvents, CameraCaptureSession>. So, this is what a method creating such an Observable can look like (again, the verifications are removed to make it easier to read):
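A sketch of that method; onConfigureFailed is mapped to onError, and the error type here is my own choice:

```java
public static Observable<Pair<CaptureSessionStateEvents, CameraCaptureSession>>
createCaptureSession(
        @NonNull CameraDevice cameraDevice,
        @NonNull List<Surface> surfaceList) {
    return Observable.create(emitter ->
        cameraDevice.createCaptureSession(surfaceList,
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(@NonNull CameraCaptureSession session) {
                    emitter.onNext(Pair.create(
                            CaptureSessionStateEvents.ON_CONFIGURED, session));
                }

                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                    emitter.onError(new IllegalStateException(
                            "createCaptureSession failed"));
                }

                @Override
                public void onClosed(@NonNull CameraCaptureSession session) {
                    emitter.onNext(Pair.create(
                            CaptureSessionStateEvents.ON_CLOSED, session));
                    emitter.onComplete();
                }
            }, null));
}
```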
For a live picture from the camera to appear on the screen, we need to constantly receive new images from the device and send them for display. There’s a convenient method in the API for this CameraCaptureSession.setRepeatingRequest:
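Its declaration in the framework:

```java
// android.hardware.camera2.CameraCaptureSession
public abstract int setRepeatingRequest(
        @NonNull CaptureRequest request,
        @Nullable CaptureCallback listener,
        @Nullable Handler handler)
        throws CameraAccessException;
```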
Again, we want to distinguish the generated events, so let's create an enum:
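The values mirror the CameraCaptureSession.CaptureCallback methods; the enum's name, CaptureSessionEvents, is my assumption:

```java
public enum CaptureSessionEvents {
    ON_STARTED,
    ON_PROGRESSED,
    ON_COMPLETED,
    ON_FAILED,
    ON_SEQUENCE_COMPLETED,
    ON_SEQUENCE_ABORTED
}
```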
We can also see that these methods receive a lot of information that we want to include in the reactive stream: CameraCaptureSession, CaptureRequest and CaptureResult. Since a simple Pair won't do here, we'll create a POJO:
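A sketch of the POJO; the class name CaptureSessionData is my assumption:

```java
public class CaptureSessionData {
    final CaptureSessionEvents event;
    final CameraCaptureSession session;
    final CaptureRequest request;
    final CaptureResult result;

    CaptureSessionData(CaptureSessionEvents event,
                       CameraCaptureSession session,
                       CaptureRequest request,
                       CaptureResult result) {
        this.event = event;
        this.session = session;
        this.request = request;
        this.result = result;
    }
}
```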
We’ll transfer creation of CameraCaptureSession.CaptureCallback into a separate method:
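A sketch of that factory method, assuming the POJO is called CaptureSessionData and the enum CaptureSessionEvents:

```java
@NonNull
private static CameraCaptureSession.CaptureCallback createCaptureCallback(
        final ObservableEmitter<CaptureSessionData> emitter) {
    return new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                       @NonNull CaptureRequest request,
                                       @NonNull TotalCaptureResult result) {
            emitter.onNext(new CaptureSessionData(
                    CaptureSessionEvents.ON_COMPLETED, session, request, result));
        }

        @Override
        public void onCaptureFailed(@NonNull CameraCaptureSession session,
                                    @NonNull CaptureRequest request,
                                    @NonNull CaptureFailure failure) {
            emitter.onError(new IllegalStateException("capture failed"));
        }
    };
}
```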
From all these messages, we’re interested in onCaptureCompleted/onCaptureFailed, and we’ll ignore the rest. If you need them for your project, it’s not hard to add them.
Now everything’s ready, so we can create an Observable:
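A sketch, assuming the callback factory from the previous step is called createCaptureCallback:

```java
static Observable<CaptureSessionData> fromSetRepeatingRequest(
        @NonNull CameraCaptureSession captureSession,
        @NonNull CaptureRequest request) {
    return Observable.create(emitter ->
            captureSession.setRepeatingRequest(
                    request, createCaptureCallback(emitter), null));
}
```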
In fact, this step is essentially the same as the previous one, except that we execute the request once rather than repeatedly. For this, we'll use the method CameraCaptureSession.capture.
TextureView notifies its listener when the Surface is ready for use.
This time, let’s create PublishSubject, which will generate the events when TextureView calls the listener methods:
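A sketch, with mOnSurfaceTextureAvailable as an assumed field name; only the "available" event is forwarded here:

```java
private final PublishSubject<SurfaceTexture> mOnSurfaceTextureAvailable =
        PublishSubject.create();

// in onCreate:
mTextureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        mOnSurfaceTextureAvailable.onNext(surface);
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return true; // let TextureView release the SurfaceTexture
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) { }
});
```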
By using PublishSubject, we avoid potential problems with multiple subscriptions. We set the SurfaceTextureListener just once, in onCreate, and live peacefully ever after: PublishSubject can be subscribed to as many times as necessary, passing the events to all subscribers.
One specific flaw in using Camera2 API is that you cannot explicitly set the size of the image. The camera chooses one of the supported resolutions based on the size of the Surface sent to it. This means that the following trick is needed: we get the list of image sizes supported by the camera, choose the most suitable one and then set the buffer size according to this information.
If we want to save proportions, we need to set the TextureView’s aspect ratio. For this we’ll override the onMeasure method.
Writing to a file
To save an image from Surface to the file we’ll use the ImageReader class.
A few words on choosing the size for ImageReader. Firstly, we need to choose this from the list of those supported by the camera. Secondly, the aspect ratio must match that chosen for preview.
To get a notification from ImageReader that an image is ready, we'll use the method:
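Its declaration, together with the listener interface it takes:

```java
// android.media.ImageReader
public void setOnImageAvailableListener(
        OnImageAvailableListener listener,
        Handler handler)

// android.media.ImageReader.OnImageAvailableListener
public interface OnImageAvailableListener {
    void onImageAvailable(ImageReader reader);
}
```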
Each time the Camera API writes an image to the Surface supplied by our ImageReader, this callback will be triggered.
Let’s make this operation reactive: we’ll create an Observable, which will emit an event each time ImageReader is ready to supply an image:
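A sketch of that method; the listener is removed via setCancellable when the subscription is disposed of:

```java
public static Observable<ImageReader> createOnImageAvailableObservable(
        @NonNull ImageReader imageReader) {
    return Observable.create(emitter -> {
        imageReader.setOnImageAvailableListener(emitter::onNext, null);
        // remove the listener when the Observable is disposed of
        emitter.setCancellable(() ->
                imageReader.setOnImageAvailableListener(null, null));
    });
}
```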
Take note that we’re using the ObservableEmitter.setCancellable method to delete the listener when Observable is being unsubscribed.
Saving to the file is a long operation, so let’s make this reactive using the fromCallable method:
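A sketch of the save operation. Note that acquireLatestImage can return null if we arrive too late; the isDisposed-style checks are again omitted for brevity:

```java
public static Observable<File> saveImage(
        @NonNull ImageReader imageReader, @NonNull File file) {
    return Observable.fromCallable(() -> {
        try (Image image = imageReader.acquireLatestImage();
             FileChannel output = new FileOutputStream(file).getChannel()) {
            // a JPEG Image carries its encoded bytes in a single plane
            ByteBuffer buffer = image.getPlanes()[0].getBuffer();
            output.write(buffer);
            return file;
        }
    });
}
```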
Now we can set up this sequence of actions: when a ready image appears in ImageReader, we'll save it on the Schedulers.io() thread, then switch to the UI thread and notify the UI that the file is ready:
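Roughly like this, assuming helper names createOnImageAvailableObservable and saveImage for the two operations just described; mFile and showMessage are placeholders:

```java
createOnImageAvailableObservable(mImageReader)
        .observeOn(Schedulers.io())
        .flatMap(imageReader -> saveImage(imageReader, mFile))
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(file -> showMessage("Saved to " + file.getAbsolutePath()));
```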
So, now we're basically ready! We can already create Observables for the basic asynchronous actions the application needs. Now for the most interesting part: composing the reactive streams.
As a warm-up let’s make the camera open after SurfaceTexture is ready for use:
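A sketch of that warm-up chain, with mOnSurfaceTextureAvailable being the PublishSubject described earlier and openCamera the factory method defined above; mCameraId and mCameraManager are assumed fields:

```java
Observable<Pair<DeviceStateEvents, CameraDevice>> cameraObservable =
        mOnSurfaceTextureAvailable
                .firstElement().toObservable()
                .flatMap(surfaceTexture -> openCamera(mCameraId, mCameraManager))
                .share();
```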
The key operator here is flatMap.
In our case, on receiving an event concerning SurfaceTexture being ready, the openCamera function will be executed and emit the events from the created Observable further into the reactive stream.
It’s also important to understand why we use the share operator at the end of the chain. This operator is equivalent to the publish().refCount() operator chain.
If you look at this marble diagram for a long time, you’ll notice that the result is very similar to using PublishSubject. Indeed, we’re solving a similar problem - if our Observable is subscribed to several times, we don’t want to open the camera again every time.
Let’s introduce another couple of Observables for convenience.
openCameraObservable will emit an event when the camera successfully opens, and closeCameraObservable will emit an event when the camera closes.
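Assuming cameraObservable is the shared stream of Pair<DeviceStateEvents, CameraDevice> from the warm-up step, these can be simple filter/map chains:

```java
Observable<CameraDevice> openCameraObservable = cameraObservable
        .filter(pair -> pair.first == DeviceStateEvents.ON_OPENED)
        .map(pair -> pair.second)
        .share();

Observable<CameraDevice> closeCameraObservable = cameraObservable
        .filter(pair -> pair.first == DeviceStateEvents.ON_CLOSED)
        .map(pair -> pair.second)
        .share();
```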
Let’s put in one more step: after the camera has successfully opened, we’ll open the session
In a similar fashion, let’s create another couple of Observables to signal that the session has been successfully opened or closed.
Finally, we can send a repeated request to display a preview:
Now it's enough to subscribe to this chain
and a live picture from the camera appears on the screen!
A small digression: if we inline all the intermediate Observables, we get the following chain of operators.
This is enough to show the preview. Impressive, right?
Frankly speaking, this solution has issues with closing resources, and we can't actually take a photo yet. I've shown it so that the full chain can be seen. All the intermediate Observables are needed to build more complex behaviour later on.
For us to be able to unsubscribe, we need to save the Disposable returned by the subscribe method. The easiest way is to use CompositeDisposable:
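A sketch of the pattern; previewObservable stands in for any chain we subscribe to:

```java
private final CompositeDisposable mCompositeDisposable = new CompositeDisposable();

// collect every subscription:
mCompositeDisposable.add(previewObservable.subscribe());

// and in onPause, dispose of them all at once:
mCompositeDisposable.clear();
```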
In the real code, I’ve added mCompositeDisposable.add(…subscribe()) everywhere, but I’ve left this out in the article to make it easier to read.
How to create CaptureRequests
Attentive readers have of course already noticed that we are using the createPreviewBuilder method, which we haven’t described yet. Let’s take a look at what’s inside:
Here we use the preview request template provided by the Camera2 API, add our Surface, and say that we want auto-focus, auto-exposure and auto white balance (the "3A"). To achieve this, we just need to set a few flags:
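A sketch of createPreviewBuilder and the 3A setup; the exact flag set may differ from the article's, but these are the standard Camera2 auto modes:

```java
@NonNull
CaptureRequest.Builder createPreviewBuilder(
        @NonNull CameraDevice cameraDevice,
        @NonNull Surface previewSurface) throws CameraAccessException {
    CaptureRequest.Builder builder =
            cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(previewSurface);
    setup3Auto(builder);
    return builder;
}

private void setup3Auto(CaptureRequest.Builder builder) {
    builder.set(CaptureRequest.CONTROL_MODE,
            CaptureRequest.CONTROL_MODE_AUTO);
    builder.set(CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
    builder.set(CaptureRequest.CONTROL_AE_MODE,
            CaptureRequest.CONTROL_AE_MODE_ON);
    builder.set(CaptureRequest.CONTROL_AWB_MODE,
            CaptureRequest.CONTROL_AWB_MODE_AUTO);
}
```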
Now let’s outline the plan of action. First and foremost, we want to take a picture when preview has already started, which means everything is ready to go. For this, we’ll use the operator combineLatest:
However, this will generate events constantly on receiving fresh data from previewObservable, so let’s limit it to the first event:
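A sketch of those two steps combined; mOnShutterClick, a PublishSubject of shutter-button clicks, is an assumed name, and captureStillPicture is the method discussed below:

```java
Observable<CaptureSessionData> takePictureObservable =
        Observable.combineLatest(
                previewObservable, mOnShutterClick,
                (captureSessionData, click) -> captureSessionData)
        .firstElement().toObservable()
        .flatMap(captureSessionData ->
                captureStillPicture(captureSessionData.session));
```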
We’ll wait while autofocus and autoexposure do their work:
Finally, let’s take a photo:
The full operator sequence is:
Let’s look at what’s inside captureStillPicture:
All of this sounds familiar: we create a request, launch the capture and wait for the result.
The request is built from the STILL_PICTURE template; we add the Surface used for saving to a file, as well as a few magic flags telling the camera that this is an important request to save the image. We also specify the orientation of the image in the JPEG file.
Good applications always close resources, especially demanding ones, such as a camera device. Let’s close everything after the onPause event:
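One possible shape for this chain, assuming an mOnPauseSubject fed from onPause, plus session-closed and camera-closed streams along the lines described above; all names here are illustrative:

```java
Observable.combineLatest(previewObservable, mOnPauseSubject,
                (captureSessionData, o) -> captureSessionData)
        .firstElement().toObservable()
        .doOnNext(captureSessionData -> captureSessionData.session.close())
        .flatMap(captureSessionData -> closeSessionObservable)  // wait for the session's ON_CLOSED
        .doOnNext(session -> session.getDevice().close())
        .flatMap(session -> closeCameraObservable)              // wait for the device's ON_CLOSED
        .subscribe(cameraDevice -> mCompositeDisposable.clear());
```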
Here, we close the session and then the device in sequence, waiting for confirmation from the API at each step.
In this article we created an application that can show a live preview and take photos, essentially a fully working camera application. The only aspects we haven't covered are waiting for autofocus and automatic exposure to do their work, and choosing the image orientation. You'll get the answers to these issues in part two of this article.
With RxJava, developers have a powerful tool for taming asynchronous APIs. Used competently, it helps you avoid Callback Hell and produces clean code that is easy to read and maintain. Share your opinions in the comment section!