Use the Mobile Media API to add video functionality to your Java mobile applications

Many mobile phones today support the Mobile Media API (MMAPI), defined by JSR-135. Using MMAPI, you can develop robust and useful Java mobile video applications. In this two-part article, JavaWorld contributor Srijeeb Roy shows you how to develop and test a mobile video application using JSR-135. In Part 1 you will learn how to capture and record video content using Java ME (Java Micro Edition). In Part 2 you will learn how to store your video content on a server using the Generic Connection Framework (GCF) over HTTP.

Multimedia processing with MMAPI can be considered in two parts:

- Data delivery protocol handling
- Data content handling

Data delivery protocol handling involves reading data from a source (a capture device or a file, for instance) into a media-processing system. Data content handling involves processing the media data (that is, parsing or decoding it) and rendering the media to an output device, such as a video display. MMAPI provides two high-level types to help you with these tasks: the javax.microedition.media.protocol.DataSource class deals with data delivery protocol handling, and the javax.microedition.media.Player interface deals with data content handling. To help the developer, a Manager class (javax.microedition.media.Manager) creates Players from DataSources, locators, and InputStreams.

Before you continue to the hands-on portion of this article, you should download and install a JDK, version 1.5 or later (1.6 or later if you're using Windows Vista), and the Sun Java Wireless Toolkit. I used Sun Java Wireless Toolkit 2.5.1 for CLDC Early Access while writing this article.

Once you've installed these packages, open the Wireless Toolkit. You will see a screen that looks like the one below.

Figure 1. Sun Java Wireless Toolkit first screen

Now click the New Project icon or choose File -> New Project.
In the New Project pop-up, enter MobileVideoApp as the project name and com.srijeeb.jme.MobileVideoApp as the MIDlet class name, as shown in Figure 2. Alternatively, you can unzip the source code supplied with this article into the {WTK-dir}/apps/MobileVideoApp directory and open the project using the Open Project option. (Throughout this tutorial, {WTK-dir} represents the directory where you installed the Wireless Toolkit.)

Once you click Create Project, a window will pop up where you can edit the project settings. You need to change the settings on the API Selection page to match our target platform. Change the Target Platform drop-down option to JTWI. Make sure that the profile is set to MIDP 2.0 and the configuration to CLDC 1.0. Also, make sure the Mobile Media API (JSR 135) checkbox is checked in the Optional area. The screen will look like Figure 3. Click OK to move on.

Figure 3. API Selection screen

Determining the MMAPI-related features supported by a mobile device

Before going further, it is worth mentioning that not all Java-enabled mobile devices support JSR-135, and a few that do support it don't support video recording. To help you discover a mobile device's media-processing features, MMAPI provides a few system properties that you can query using the System.getProperty() method.

Let's begin by determining the features supported by the mobile device we'll be executing our code on. For this example, I will compare results from two mobile handsets: the Nokia 3230 and the Nokia 6600. The specifications for both phones claim that they support JSR-135. We'll start by writing the MIDlet and a Form that will display the system properties related to MMA capabilities, as shown in Listing 1. Use your favorite editor (Notepad will suffice) to edit the code. Remember to save files under {WTK-dir}/apps/MobileVideoApp/src/com/srijeeb/jme.

Listing 1.
Our main MIDlet class: MobileVideoApp

    package com.srijeeb.jme;

    import javax.microedition.lcdui.*;
    import javax.microedition.media.*;
    import javax.microedition.midlet.MIDlet;

    public class MobileVideoApp extends MIDlet {
        private Display display;
        private PropertyForm form;

        public MobileVideoApp() {
            form = new PropertyForm("Mobile Video App", this);
        }

        public void startApp() {
            display = Display.getDisplay(this);
            display.setCurrent(form);
        }

        public void pauseApp() {
        }

        public void destroyApp(boolean unconditional) {
        }

        public Display getDisplay() {
            return display;
        }
    }

I won't be explaining methods such as startApp(), pauseApp(), or destroyApp(), because these are generic to any MIDlet application and require no special attention in our example mobile media application. PropertyForm is a class that we have extended from the javax.microedition.lcdui.Form class. Listing 2 shows PropertyForm's important sections.

Listing 2. PropertyForm code for displaying MMA-specific properties

    package com.srijeeb.jme;

    import javax.microedition.lcdui.*;
    import javax.microedition.media.*;

    public class PropertyForm extends Form implements CommandListener {
        private final static Command CMD_EXIT =
            new Command("Exit", Command.EXIT, 1);
        private MobileVideoApp parentMidlet = null;

        protected PropertyForm(String in, MobileVideoApp parentMidlet_) {
            super(in);
            this.parentMidlet = parentMidlet_;
            initComponents();
        }

        public void initComponents() {
            append(JMEUtility.getImage("/images/banner.png"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getStringItem("version", "microedition.media.version"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getStringItem("Audio Capture", "supports.audio.capture"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getStringItem("Video Capture", "supports.video.capture"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getStringItem("Recording", "supports.recording"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getStringItem("Audio Enc", "audio.encodings"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getStringItem("Video Enc", "video.encodings"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getStringItem("Video Snp Enc", "video.snapshot.encodings"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getStringItem("Stream Cont", "streamable.contents"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(JMEUtility.getImage("/images/separator.png"));
            append(getSupportedProtocols());
            append(JMEUtility.getImage("/images/separator.png"));
            append(getSupportedContentTypeForHttp());
            addCommand(CMD_EXIT);
            setCommandListener(this);
        }

        private StringItem getStringItem(String name, String propertyName) {
            String value = System.getProperty(propertyName);
            return new StringItem("[" + name + "]", value);
        }

        private StringItem getSupportedProtocols() {
            return new StringItem("[Protocols]",
                concatArray(Manager.getSupportedProtocols(null)));
        }

        private StringItem getSupportedContentTypeForHttp() {
            return new StringItem("[Content http]",
                concatArray(Manager.getSupportedContentTypes("http")));
        }

        public void commandAction(Command c, Displayable d) {
            if (c == CMD_EXIT) {
                parentMidlet.destroyApp(true);
                parentMidlet.notifyDestroyed();
            }
        }

        public String concatArray(String[] list) {
            String ret = "";
            if (list != null && list.length > 0) {
                for (int i = 0; i < list.length; i++) {
                    ret += list[i];
                    if (i < (list.length - 1)) {
                        ret += "|";
                    }
                }
            }
            return ret;
        }
    }

You will have noticed the use of the JMEUtility class in Listing 2. This is a small class that I have written to perform some utility tasks (loading images, showing error messages, and so on). It's not that important in the context of our article; you can find out more by examining JMEUtility.java, which is part of the source code supplied with this article.
For the time being, assume that once we call JMEUtility.getImage(String), it loads the image named in the parameter and caches it; the next time the same method call occurs, it returns the image from the cache. (The reasons for caching the image have to do with design patterns and best practices for mobile application development, and are outside the scope of this article.)

Our main intention in Listing 2 is to retrieve the MMA-related properties from the device and display them in a form. Effectively, when we call System.getProperty(propertyName), the mobile device should return the value for the particular property passed in the parameter. For example, System.getProperty("supports.recording") returns a value of true on the Nokia 3230.

The important property names related to MMAPI are as follows:

- microedition.media.version: The version of the MMAPI specification that is implemented.
- supports.audio.capture: Is audio capture supported? The string returned is either true or false.
- supports.video.capture: Is video capture supported? The string returned is either true or false.
- supports.recording: Is recording supported? The string returned is either true or false.
- audio.encodings: The string returned specifies the supported audio-capture formats.
- video.encodings: The string returned specifies the supported video-capture formats.
- video.snapshot.encodings: The string returned specifies the video snapshot formats for the getSnapshot() method in VideoControl.
- streamable.contents: The string returned specifies the supported streamable content types.

You may find it confusing to reconcile some of these property values with what you observe on a real device. For example, the Nokia 3230 and the Nokia 6600 both return true for the supports.video.capture and supports.recording properties, so it seems that both devices will support video recording. But there is a catch.
If supports.recording returns true, you can record media using at least one player type (at least one, but not necessarily all). The Nokia 6600 supports recording audio but not video; the Nokia 3230 supports both.

You might also have noticed two other method calls in Listing 2: getSupportedProtocols() and getSupportedContentTypeForHttp(). Let's take a look at these in more detail.

Inside the getSupportedProtocols() method, we query the Manager class (javax.microedition.media.Manager) for the protocols it can handle to retrieve media. A Manager.getSupportedProtocols(null) call returns all protocols supported by the Manager. The method signature looks like this:

    public static java.lang.String[] getSupportedProtocols(java.lang.String content_type)

Here, if the given content type is video/mpeg, the supported protocols that can be used to play back MPEG video are returned. If null is passed in as the content type, all protocols supported by this implementation are returned.

Inside the getSupportedContentTypeForHttp() method, we query the Manager class for the content types it can handle for the HTTP protocol. The Manager.getSupportedContentTypes("http") call returns all content types that can be delivered over HTTP to a player created by this Manager. The method signature looks like this:

    public static java.lang.String[] getSupportedContentTypes(java.lang.String protocol)

For example, if the given protocol is http, the supported content types that can be played back over the HTTP protocol are returned.

Now it's time to compile and test our code using the Wireless Toolkit.
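Before moving on, the property-based capability checks described above can be condensed into a small helper. The sketch below is plain Java SE, so you can try it on a desktop JVM, where the MMAPI properties are simply absent and System.getProperty() returns null; treating a missing property as "not supported" keeps the check safe on devices with partial MMAPI implementations. The class and method names here are my own, not part of MMAPI:

```java
public class MmapiCapabilities {
    // Returns true only if the platform explicitly reports "true"
    // for the given capability property (e.g. "supports.recording").
    // A null result means the property is absent, which we treat as
    // "not supported" rather than throwing or guessing.
    public static boolean supports(String propertyName) {
        String value = System.getProperty(propertyName);
        return "true".equals(value);
    }

    public static void main(String[] args) {
        // On a desktop JVM these all report false, because the MMAPI
        // properties only exist on a JSR-135 device.
        String[] props = {
            "supports.audio.capture",
            "supports.video.capture",
            "supports.recording"
        };
        for (String p : props) {
            System.out.println(p + " = " + supports(p));
        }
    }
}
```

The same null-safe pattern applies on real handsets: a device that implements JSR-135 but not, say, recording may return null or "false" for supports.recording, and both should be handled identically.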
First, though: if you are creating and coding the example from scratch (that is, if you have not downloaded the source supplied with this article), you need to copy two images (banner.png and separator.png) from the supplied source into the {WTK-dir}/apps/MobileVideoApp/res/images directory. You also need to copy JMEUtility.java from the supplied source into the {WTK-dir}/apps/MobileVideoApp/src/com/srijeeb/jme directory. Once you've done this, click the Build button in the Wireless Toolkit.

After the build succeeds, click the Run icon, which brings up the mobile phone emulator. Once you've clicked the Launch icon in the emulator, the MIDlet will start and the PropertyForm will be displayed, as shown in Figure 4. When the form is displayed, you can verify the properties that the toolkit emulator supports. Once you have seen all the property values, close the emulator.

You can now package the application and deploy it on a real mobile device, or you can wait until the end of the article and just deploy the final application. If you want to deploy now, return to the Wireless Toolkit and choose Project > Package > Create Package, as shown in Figure 5. The Create Package action creates MobileVideoApp.jar in the {WTK-dir}/apps/MobileVideoApp/bin directory. In the same directory, you will also find MobileVideoApp.jad, the application descriptor file. You can now transfer the JAR and JAD files to your mobile device.

Capturing video on a mobile device

Let's see how to capture video on a mobile device. Keep in mind that capturing does not mean recording; it just refers to using the camera control and displaying the video content on the device's screen. To add video-capture functionality to our application, we will write a new Form, called VideoRecordingForm.
Despite its name, for the time being we will just use the new form to capture the video seen through the camera of the mobile device; later, we will enhance it to record the video. To extend our application, we need to modify the PropertyForm class, create a new Form named OptionForm (which will display options allowing the user to record video, play the recorded video, and so on), and write a VideoCanvas class that will display the video.

First, let's enhance PropertyForm.java. We will add another command to the PropertyForm so that it can display the OptionForm. Just after the point where we declared CMD_EXIT earlier, we add a command named CMD_OK, and we enhance the commandAction() method to handle the case when the OK option is chosen. All this is shown in Listing 3.

Listing 3. Modified PropertyForm code to open OptionForm

    public class PropertyForm extends Form implements CommandListener {
        private final static Command CMD_EXIT =
            new Command("Exit", Command.EXIT, 1);
        private final static Command CMD_OK =
            new Command("Ok", Command.SCREEN, 1);
        ...
        ...

        public void commandAction(Command c, Displayable d) {
            if (c == CMD_EXIT) {
                parentMidlet.destroyApp(true);
                parentMidlet.notifyDestroyed();
            } else if (c == CMD_OK) {
                OptionForm optionForm = new OptionForm("", parentMidlet, this);
                parentMidlet.getDisplay().setCurrent(optionForm);
            }
        }
    }

Now we can write OptionForm.java, shown in Listing 4.

Listing 4.
OptionForm, displaying options to the user

    package com.srijeeb.jme;

    import javax.microedition.lcdui.*;
    import javax.microedition.midlet.MIDlet;

    public class OptionForm extends Form implements CommandListener {
        private final static Command CMD_EXIT =
            new Command("Exit", Command.EXIT, 1);
        private final static Command CMD_OK =
            new Command("Ok", Command.SCREEN, 1);
        private final static Command CMD_BACK =
            new Command("Back", Command.SCREEN, 1);
        private MobileVideoApp parentMidlet = null;
        private Form parentForm = null;

        protected OptionForm(String in, MobileVideoApp parentMidlet_,
                             Form parentForm_) {
            super(in);
            this.parentMidlet = parentMidlet_;
            this.parentForm = parentForm_;
            initComponents();
        }

        public void initComponents() {
            append(JMEUtility.getImage("/images/banner.png"));
            append(getOptions());
            addCommand(CMD_EXIT);
            addCommand(CMD_OK);
            addCommand(CMD_BACK);
            setCommandListener(this);
        }

        Image[] imageArray = {
            JMEUtility.getImage("/images/Icon.png"),
            JMEUtility.getImage("/images/Icon.png"),
            JMEUtility.getImage("/images/Icon.png")
        };

        private ChoiceGroup options = null;

        private ChoiceGroup getOptions() {
            if (options == null)
                options = new ChoiceGroup("", Choice.EXCLUSIVE,
                    new String[]{"Open Svr Video", "Open Video by Id",
                                 "Record Video"}, imageArray);
            return options;
        }

        public void commandAction(Command c, Displayable d) {
            if (c == CMD_EXIT) {
                parentMidlet.destroyApp(true);
                parentMidlet.notifyDestroyed();
            } else if (c == CMD_OK) {
                if (options.isSelected(0)) {
                    // we will implement this part later
                } else if (options.isSelected(1)) {
                    // we will implement this part later
                } else if (options.isSelected(2)) {
                    VideoRecordingForm aVideoRecordingForm =
                        new VideoRecordingForm("", parentMidlet, this);
                    parentMidlet.getDisplay().setCurrent(aVideoRecordingForm);
                }
            } else if (c == CMD_BACK) {
                parentMidlet.getDisplay().setCurrent(parentForm);
            }
        }
    }

In OptionForm.java, we are just displaying the different options available; we will learn how to actually implement these options later in this
article. At the moment, we are implementing the third option, Record Video. Later, we will enhance this class to play the recorded video back.

Look at the code inside the commandAction() method. When the user selects the third option, we instantiate a VideoRecordingForm object and set it as the current display (that is, the display where the video-capture screen will appear).

Now consider VideoRecordingForm.java, shown in Listing 5. As mentioned earlier, for the time being we will just use this form to capture the video; later, we will enhance it to actually record it.

Listing 5. Initial version of VideoRecordingForm, which captures only video

    package com.srijeeb.jme;

    import javax.microedition.lcdui.*;
    import javax.microedition.media.*;
    import javax.microedition.media.control.*;
    import javax.microedition.midlet.MIDlet;
    import java.io.*;

    public class VideoRecordingForm extends Form implements CommandListener {
        private final static Command CMD_EXIT =
            new Command("Exit", Command.EXIT, 1);
        private final static Command CMD_BACK =
            new Command("Back", Command.SCREEN, 1);
        private Player player;
        private VideoControl videoControl;
        private VideoCanvas aVideoCanvas;
        String contentType = null;
        private MobileVideoApp parentMidlet = null;
        private VideoRecordingForm self = null;
        private Form parentForm = null;

        protected VideoRecordingForm(String in, MobileVideoApp parentMidlet_,
                                     Form parentForm_) {
            super(in);
            this.parentMidlet = parentMidlet_;
            this.parentForm = parentForm_;
            self = this;
            initComponents();
        }

        public void initComponents() {
            addCommand(CMD_EXIT);
            addCommand(CMD_BACK);
            setCommandListener(this);
            (new CameraThread()).start();
        }

        public void commandAction(Command c, Displayable d) {
            if (c == CMD_EXIT) {
                parentMidlet.destroyApp(true);
                parentMidlet.notifyDestroyed();
            } else if (c == CMD_BACK) {
                releaseResources();
                parentMidlet.getDisplay().setCurrent(parentForm);
            }
        }

        private void showCamera() {
            try {
                releaseResources();
                player =
                    Manager.createPlayer("capture://video");
                player.realize();
                videoControl = (VideoControl)player.getControl("VideoControl");
                aVideoCanvas = new VideoCanvas();
                aVideoCanvas.initControls(videoControl, player);
                aVideoCanvas.addCommand(CMD_BACK);
                aVideoCanvas.addCommand(CMD_EXIT);
                aVideoCanvas.setCommandListener(this);
                parentMidlet.getDisplay().setCurrent(aVideoCanvas);
                player.start();
                contentType = player.getContentType();
            } catch (Exception e) {
                e.printStackTrace();
                JMEUtility.showErrorAlert("ERROR", e.getMessage(), 4000,
                    this, parentMidlet);
            }
        }

        private void releaseResources() {
            if (player != null) {
                try {
                    player.stop();
                    player.close();
                } catch (Exception e) {}
            }
        }

        class CameraThread extends Thread {
            public void run() {
                showCamera();
            }
        }
    }

The most important part of Listing 5 is the showCamera() method. In its very first line, we release any resources used by our player by calling the releaseResources() method, which calls stop() and close() on the player object. Then we create a Player (javax.microedition.media.Player) object from the Manager class, like so:

    player = Manager.createPlayer("capture://video");

Next, we call the realize() method on the player:

    player.realize();

Briefly, a Player has its own life cycle, which contains five states: UNREALIZED, REALIZED, PREFETCHED, STARTED, and CLOSED. When the Player is first constructed, it is in the UNREALIZED state. When it transitions from UNREALIZED to REALIZED, the Player performs the communication necessary to locate all of the resources it needs to function (such as communicating with a server or a file system). The realize() method allows an application to initiate this potentially time-consuming process at an appropriate time. Typically, a Player moves from the UNREALIZED state to the REALIZED state, then to the PREFETCHED state, and finally to the STARTED state.
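The life cycle described above can be pictured as a small state machine. The sketch below is plain Java, not MMAPI; the enum and transition rules are mine, mirroring the state names defined on the Player interface, and it models the documented behavior that start() implicitly realizes and prefetches an unrealized Player first:

```java
public class PlayerLifecycle {
    // States mirror the constants defined on javax.microedition.media.Player.
    enum State { UNREALIZED, REALIZED, PREFETCHED, STARTED, CLOSED }

    private State state = State.UNREALIZED;  // a new Player starts here

    // realize() locates the resources the Player needs
    // (e.g. a server or the file system).
    void realize()  { require(State.UNREALIZED); state = State.REALIZED; }

    // prefetch() acquires scarce/exclusive resources and fills buffers.
    void prefetch() { require(State.REALIZED); state = State.PREFETCHED; }

    // start() begins rendering; on a Player it implicitly realizes and
    // prefetches first, which this sketch models explicitly.
    void start() {
        if (state == State.UNREALIZED) realize();
        if (state == State.REALIZED) prefetch();
        require(State.PREFETCHED);
        state = State.STARTED;
    }

    // close() is legal from any state and is final.
    void close() { state = State.CLOSED; }

    State state() { return state; }

    private void require(State expected) {
        if (state != expected)
            throw new IllegalStateException(
                "expected " + expected + ", was " + state);
    }

    public static void main(String[] args) {
        PlayerLifecycle p = new PlayerLifecycle();
        p.realize();           // UNREALIZED -> REALIZED
        p.start();             // REALIZED -> PREFETCHED -> STARTED
        System.out.println("state after start(): " + p.state());
    }
}
```

Calling realize() early, as showCamera() does, lets the application absorb the slow resource-location step before the user expects playback to begin.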
To display the captured video, we have to obtain a javax.microedition.media.control.VideoControl object from the player:

    VideoControl videoControl = (VideoControl)player.getControl("VideoControl");

VideoControl controls the display of video. Note that VideoControl is not the control that records video; it just uses the mobile device's camera and displays the video seen through the lens. A separate control, named RecordControl, is used for recording; we'll discuss it later in this article.

Next, we will use a Canvas object to display the video seen through the mobile phone camera. To display the video, we will write a class called VideoCanvas, which extends the Canvas class. After creating the VideoControl object, we pass it to our VideoCanvas to display the video from the camera:

    aVideoCanvas = new VideoCanvas();
    aVideoCanvas.initControls(videoControl, player);

We also have to set the VideoCanvas object as the current display and start the player. The following two lines do these jobs:

    parentMidlet.getDisplay().setCurrent(aVideoCanvas);
    player.start();

Another important point: the player and manager must be used on a thread separate from the rest of the application, to make sure the whole application does not hang or become blocked by a deadlock. You might have already noticed that the showCamera() method is therefore invoked from initComponents() on a separate thread (CameraThread), not directly.

Let's now write the VideoCanvas class, shown in Listing 6.

Listing 6.
VideoCanvas code to display the captured or recorded video

    package com.srijeeb.jme;

    // All the required imports

    public class VideoCanvas extends Canvas {
        private VideoControl vc = null;
        private Player plr = null;

        public void initControls(VideoControl videoControl, Player player) {
            int width = getWidth();
            int height = getHeight();
            this.vc = videoControl;
            this.plr = player;
            videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO, this);
            try {
                videoControl.setDisplayLocation(2, 2);
                videoControl.setDisplaySize(width - 4, height - 4);
            } catch (MediaException me) {
                try {
                    videoControl.setDisplayFullScreen(true);
                } catch (MediaException me2) {}
            }
            videoControl.setVisible(true);
        }

        public void paint(Graphics g) {
            int width = getWidth();
            int height = getHeight();
            // Draw a green border around the VideoControl.
            g.setColor(0x00ff00);
            g.drawRect(0, 0, width - 1, height - 1);
            g.drawRect(1, 1, width - 3, height - 3);
        }
    }

Note the following important points in Listing 6:

- We set the display mode of the video using the initDisplayMode() method. There are two possible modes, USE_DIRECT_VIDEO and USE_GUI_PRIMITIVE. USE_DIRECT_VIDEO can be used on platforms with an LCD user interface (LCDUI); in this mode, the video is rendered directly onto the canvas.
- We set the display location and display size using the setDisplayLocation() and setDisplaySize() methods, respectively. If setting the size fails with a MediaException, we fall back to full-screen display.
- Finally, in the paint() method, we draw a green border around the video displayed in the Canvas using the setColor() and drawRect() methods of the Graphics object.

We are almost ready to test our application in the Wireless Toolkit. First, you need to copy another image (Icon.png) from the supplied source to the {WTK-dir}/apps/MobileVideoApp/res/images directory. Once you've done that, build the application in the Wireless Toolkit and click the Run icon.
Because we have added another command to our PropertyForm, we will now see the OK option on the property screen, as shown in Figure 6. Selecting OK displays the Options screen. From this screen, select the Record Video option, as shown in Figure 7, and then select OK from the menu at the bottom of the screen.

You will now see a video, as shown in Figure 8. Because we're running in the Wireless Toolkit, the video you see in the emulator is a previously recorded sample video from Sun Microsystems; this emulates the camera of a wireless device in action. When the same code is deployed on a real JSR-135-capable device, the screen will display the video seen through your device's camera.

Figure 8. Wireless Toolkit emulator emulating the camera for capturing video

At this point, you can once again package the application and deploy it to a real mobile device if you want to test the camera capabilities.

Recording video on a mobile device

To add recording functionality, we have to modify VideoRecordingForm.java. We will add the following capabilities to our existing VideoRecordingForm:

- Start recording
- Stop recording
- Show the recorded video
- Mute the recorded video while playing
- Control the volume of the recorded video while playing

To add this functionality, let's first add some command buttons to VideoRecordingForm.java, as shown in Listing 7.

Listing 7.
Command declarations added to VideoRecordingForm

    // The following command will be used to start the recording
    private final static Command CMD_RECORD =
        new Command("Record", Command.SCREEN, 1);

    // The following command will be used to stop the recording
    private final static Command CMD_STOP_RECORD =
        new Command("Stop Record", Command.SCREEN, 1);

    // The following command will be used to show the recorded video
    private final static Command CMD_SHOW_RECORDED =
        new Command("Show Recorded", Command.SCREEN, 1);

    // The following command will be used to mute the player
    // while showing the recorded video
    private final static Command CMD_MUTE =
        new Command("Mute", Command.SCREEN, 1);

    // The following command will be used to bring back the volume
    // from a mute state
    private final static Command CMD_UNMUTE =
        new Command("UnMute", Command.SCREEN, 1);

    // The following command will be used to play the
    // recorded video at the highest volume
    private final static Command CMD_HIGHEST_VOLUME =
        new Command("Highest Vol", Command.SCREEN, 1);

    // The following command will be used to play
    // the recorded video at medium volume
    private final static Command CMD_MEDIUM_VOLUME =
        new Command("Medium Vol", Command.SCREEN, 1);

We also have to add a VolumeControl field to VideoRecordingForm.java for controlling the volume:

    private VolumeControl volumeControl;

As mentioned earlier, all camera- and player-related operations should take place in a separate thread. Thus, we have written the VideoRecordingThread class, shown in Listing 8, as an inner class of our VideoRecordingForm.

Listing 8.
VideoRecordingThread, an inner class of VideoRecordingForm

    class VideoRecordingThread extends Thread {
        RecordControl rc;

        public void run() {
            recordVideo();
        }

        public void recordVideo() {
            try {
                rc = (RecordControl)player.getControl("RecordControl");
                if (rc == null) {
                    JMEUtility.showErrorAlert("ERROR",
                        "Recording of Video not supported", 4000,
                        self, parentMidlet);
                    return;
                }
                output = new ByteArrayOutputStream();
                rc.setRecordStream(output);
                rc.startRecord();
            } catch (Exception e) {
                JMEUtility.showErrorAlert("ERROR", e.getMessage(), 4000,
                    self, parentMidlet);
            }
        }

        public void stopRecord() {
            try {
                if (rc != null) {
                    rc.stopRecord();
                    rc.commit();
                }
            } catch (Exception e) {
                JMEUtility.showErrorAlert("ERROR", e.getMessage(), 4000,
                    self, parentMidlet);
            }
        }
    }

Take a look at the recordVideo() method in Listing 8. We grab a handle to the RecordControl by calling the getControl() method on the Player object, passing the String value "RecordControl" as the parameter.

In the very next line, we check whether the RecordControl returned is null; if it is, video recording is not supported. Because we created the player with capture://video, our Player object is attached to video. When I deployed the application to a Nokia 6600, this method returned null; when I deployed the same code to a Nokia 3230, I got a RecordControl object. This means that video recording through MMA is supported on the Nokia 3230 but not on the Nokia 6600.

You should also note that some devices need capture://audio_video instead of capture://video as the createPlayer() parameter. Thus, you may have to do some extra work to write generic video-capture code.

In the next line, we create an instance of ByteArrayOutputStream to hold the recorded video temporarily.
With the setRecordStream() method call on the RecordControl object, we attach our ByteArrayOutputStream to the RecordControl. The next line, rc.startRecord(), starts the recording.

Now let's look at our stopRecord() method. It is pretty simple: we call the stopRecord() and commit() methods on the RecordControl object to stop and commit our recording. If we don't commit the recording, no data will be available in our ByteArrayOutputStream. stopRecord() is generally called to pause the recording; if we call startRecord() after stopRecord(), recording resumes. commit() actually completes the current recording, and it implicitly calls stopRecord(). Hence, although the example calls both stopRecord() and commit(), we could conceivably have used only commit(); I used both methods just so you could see them in action.

I have seen the player.getControl("RecordControl") call return null while running within the Sun Wireless Toolkit, so we will have some difficulty testing this code there. (When I deployed the code to a Nokia 3230, I got a RecordControl instance, and all the video-recording functions worked fine.) If this call returns null in the emulator, we won't be able to test the other functionality we'll consider in this series, such as viewing recorded video or uploading the video to a server. Hence, we need to tweak our code a little so that we can at least test the rest of it in the Wireless Toolkit. We'll walk through those tweaks in the next steps.

To work around the emulator problem, I have supplied in the sample code package a dummy MPEG file (the same file used by the emulator to emulate the camera) named dummyrecord.mpg. Place this file in the {WTK-dir}/apps/MobileVideoApp/res/video directory. Once you've done this, you need to modify the VideoRecordingThread class a bit.
Remember, this is a temporary arrangement to work around a problem with the Wireless Toolkit emulator. The code we are about to add is not necessary if you deploy to a real mobile device that supports video recording through MMA, and we'll have to remove or comment it out later.

First, we add a dummy method that reads dummyrecord.mpg and stores its contents in the ByteArrayOutputStream. Then, in the run() method of VideoRecordingThread, we call the new dummy method, recordDummy(), instead of recordVideo(). Our VideoRecordingThread class now looks like Listing 9.

Listing 9. Temporary arrangement to bypass a problem in the Wireless Toolkit while recording video

    class VideoRecordingThread extends Thread {
        RecordControl rc;

        public void run() {
            // Temporarily comment out our recordVideo call
            // and use recordDummy
            //recordVideo();
            recordDummy();
        }

        public void recordVideo() {
            //The old code remains as is
        }

        public void stopRecord() {
            //The old code remains as is
        }

        public void recordDummy() {
            try {
                InputStream is =
                    getClass().getResourceAsStream("/video/dummyrecord.mpg");
                output = new ByteArrayOutputStream();
                int len = is.available();
                byte[] bytes = new byte[len];
                is.read(bytes);
                output.write(bytes, 0, bytes.length);
                aVideoCanvas.removeCommand(CMD_RECORD);
                aVideoCanvas.addCommand(CMD_STOP_RECORD);
                contentType = "video/mpeg";
            } catch (Exception e) {
                JMEUtility.showErrorAlert("ERROR", e.getMessage(), 4000,
                    self, parentMidlet);
            }
        }
    }

As you can see in Listing 9, we read a dummy MPEG file and populate our ByteArrayOutputStream. Remember that this is a temporary workaround; the recordDummy() call should be commented out and recordVideo() uncommented before you deploy this code to a real device.
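One caveat about recordDummy(): InputStream.available() is only an estimate of how many bytes can be read without blocking, not the stream's total length, and a single read() call is not guaranteed to fill the buffer. On some implementations this can leave the file partially read. A read loop is more robust. The sketch below shows the pattern in plain Java; the class and method names are my own:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    // Reads the entire stream into a byte array, regardless of how
    // many bytes each read() call happens to return.
    public static byte[] readFully(InputStream is) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int n;
        while ((n = is.read(buffer)) != -1) {  // -1 signals end of stream
            out.write(buffer, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "pretend this is MPEG data".getBytes();
        byte[] copy = readFully(new ByteArrayInputStream(data));
        System.out.println("read " + copy.length + " bytes");
    }
}
```

In recordDummy(), the same loop could write straight into the existing ByteArrayOutputStream instead of relying on available(); since the workaround only ever runs in the emulator, the simpler listing usually works there, but the loop is the safer habit.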
Also make sure you delete dummyrecord.mpg from the /apps/MobileVideoApp/res/video directory before packaging and deploying to a real device; otherwise, your application JAR file will be unnecessarily large.

Once we have written our VideoRecordingThread class, we need to start the thread from our VideoRecordingForm class. Let's handle the starting and stopping of video recording. To add these two capabilities, we add the lines in Listing 10 to the commandAction() method of VideoRecordingForm.java.

Listing 10. Starting and stopping the VideoRecordingThread from commands

public void commandAction(Command c, Displayable d) {
    //Earlier code as is
    else if (c == CMD_RECORD) {
        videoRecordThread = new VideoRecordingThread();
        videoRecordThread.start();
    }
    else if (c == CMD_STOP_RECORD) {
        videoRecordThread.stopRecord();
    }
}

As mentioned earlier, all camera- and player-related operations should execute on a separate thread, so in Listing 10 we have created an instance of the VideoRecordingThread class (which is an inner class of VideoRecordingForm) and started the thread when the Record command is initiated by the user (c == CMD_RECORD). Similarly, when the user initiates the Stop Record command, the stopRecord() method of VideoRecordingThread is invoked.

We will now see how to use the PlayerListener interface to trap such events as starting and stopping the recording, or starting the player. To achieve this, we have to implement the javax.microedition.media.PlayerListener interface and its playerUpdate() method. This method is called to deliver an event to a registered listener when a Player event is observed. Now our VideoRecordingForm class definition looks like Listing 11.

Listing 11. PlayerListener interface added to VideoRecordingForm

public class VideoRecordingForm extends Form implements CommandListener, PlayerListener {
    ...
    ...
    //All existing code remains as is

    public void playerUpdate(Player plyr, String evt, Object evtData) {
        //We need to write our code here
    }
}

Different types of Player events can be generated at different points in the application. For the whole list, please consult the MMAPI documentation. In our example application, we will primarily deal with the following events:

RECORD_STARTED: Posted when recording is started.
RECORD_STOPPED: Posted when recording is stopped.
STARTED: Posted when a Player is started.
RECORD_ERROR: Posted when an error occurs during recording.

First, take a look at all of the code inside the playerUpdate() method, shown in Listing 12, and then examine the functionality within it.

Listing 12. Code inside the playerUpdate() method of VideoRecordingForm

public void playerUpdate(Player plyr, String evt, Object evtData) {
    if ( evt == PlayerListener.STARTED && currentState != CUSTOM_SHOW_RECORDED ) {
        aVideoCanvas.addCommand(CMD_RECORD);
        aVideoCanvas.removeCommand(CMD_MUTE);
        aVideoCanvas.removeCommand(CMD_UNMUTE);
        aVideoCanvas.removeCommand(CMD_HIGHEST_VOLUME);
        aVideoCanvas.removeCommand(CMD_MEDIUM_VOLUME);
    }
    else if ( evt == PlayerListener.RECORD_STARTED ) {
        aVideoCanvas.removeCommand(CMD_RECORD);
        aVideoCanvas.addCommand(CMD_STOP_RECORD);
    }
    else if ( evt == PlayerListener.RECORD_STOPPED ) {
        aVideoCanvas.removeCommand(CMD_STOP_RECORD);
        aVideoCanvas.addCommand(CMD_SHOW_RECORDED);
    }
    else if ( evt == PlayerListener.END_OF_MEDIA && currentState == CUSTOM_SHOW_RECORDED ) {
        currentState = CUSTOM_OK_TO_RECORD;
        aVideoCanvas.addCommand(CMD_RECORD);
        aVideoCanvas.removeCommand(CMD_STOP_RECORD);
        aVideoCanvas.removeCommand(CMD_SHOW_RECORDED);
        (new CameraThread()).start();
    }
    else if ( evt == PlayerListener.RECORD_ERROR ) {
        JMEUtility.showErrorAlert("ERROR", (String)evtData, 4000, this, parentMidlet);
    }
    else if ( evt == PlayerListener.ERROR ) {
        JMEUtility.showErrorAlert("ERROR", (String)evtData, 4000, this, parentMidlet);
    }
}

When the player is started, the
PlayerListener.STARTED event is generated, and we enter our first if clause. We dynamically add a command to the video canvas for recording. We also remove the other commands, such as mute, unmute, highest volume, and medium volume, if they're present. What we are doing here is not that important; the point is that when the player is started, you get a notification that you can trap to perform some appropriate action in your code. You can make your controls very dynamic, depending on the state of the player.

Similarly, when the recording is started, we will receive a notification via the PlayerListener.RECORD_STARTED event. Once we get this notification, we remove the Record command (so that the user can't invoke it again while recording is in progress) and add the Stop Record command. When the user executes the Stop Record command and the recording stops, the PlayerListener.RECORD_STOPPED event is delivered to the playerUpdate() method. Once we receive this event, we remove the Stop Record command and dynamically add the Show Recorded command to the canvas.

We have now added our playerUpdate() method. But we also have to attach our Player object to this player listener. To do so, we call the addPlayerListener() method on the Player object. Add the line in Listing 13 to the showCamera() method just after creating the Player from the Manager.

Listing 13. Registering the PlayerListener

private void showCamera() {
    //existing code as is
    player = Manager.createPlayer("capture://video");
    player.addPlayerListener(this);
    player.realize();
    //existing code as is
}

As shown in Listing 14, we also have to add a few variables that we have used in different methods in our VideoRecordingForm class.

Listing 14.
Extra variables that need to be added in VideoRecordingForm

private short currentState = -1;
private static final short CUSTOM_SHOW_RECORDED = 1;
private static final short CUSTOM_OK_TO_RECORD = 2;
VideoRecordingThread videoRecordThread = null;
ByteArrayOutputStream output = null;

Now you can test the application, though you will not find any major visible difference from the previous iteration. For example, you will be able to see a screen, such as Figure 9, where you will be presented with the Record option (earlier, we had only Exit and Back). Once you click Record on a real device, the recording should begin; remember, though, that in the Wireless Toolkit emulator, we just read dummyrecord.mpg and play it back.

When the recording begins, the RECORD_STARTED event is fired, and we remove the Record command and add the Stop Record command, as shown in Figure 10. Remember, the Wireless Toolkit emulator is not able to start the recording of the video, so events like RECORD_STARTED and RECORD_STOPPED are not generated. Hence, the removal of the Record command and addition of the Stop Record command is handled in the recordDummy() method of the VideoRecordingThread class. Before you deploy the code to a real device, comment out the recordDummy() call and uncomment the recordVideo() call inside the run() method of VideoRecordingThread, so that on the real device the PlayerListener will be in action to deliver the RECORD_STARTED event.

You can now try to package and deploy the application to a real JSR-135-supported device, or you can wait until we've added functionality to watch the recorded video. To view the recorded video, you will need to write another thread to make sure the main thread is not blocked by video-playback processing. The code in Listing 15 shows the VideoViewerThread class, which is an inner class of VideoRecordingForm.
We will use the VolumeControl object defined in our main VideoRecordingForm class to control the volume while viewing the recorded video.

Listing 15. VideoViewerThread to view the recorded video

//This is an inner class of VideoRecordingForm
class VideoViewerThread extends Thread {
    public void run() {
        viewVideo();
    }

    public void viewVideo() {
        try {
            releaseResources();
            ByteArrayInputStream bis = new ByteArrayInputStream(output.toByteArray());
            player = Manager.createPlayer(bis, contentType);
            player.realize();
            videoControl = (VideoControl)player.getControl("VideoControl");
            volumeControl = (VolumeControl)player.getControl("VolumeControl");
            volumeControl.setLevel(100);
            if ( aVideoCanvas != null ) {
                aVideoCanvas.initControls(videoControl, player);
                parentMidlet.getDisplay().setCurrent(aVideoCanvas);
                player.start();
            }
        } catch (Exception e) {
            JMEUtility.showErrorAlert("ERROR", e.getMessage(), 4000, self, parentMidlet);
        }
    }
}

The important method in Listing 15 is viewVideo(). Here again we call the releaseResources() method to stop and close the Player object. Then we create a ByteArrayInputStream from the ByteArrayOutputStream that contains the recorded video, create the Player object from the ByteArrayInputStream, and call the realize() method of the player life cycle. Note that this form of the Manager.createPlayer() method takes two parameters: the first is the InputStream object that we created from our earlier ByteArrayOutputStream; the second is the content type of the byte array.

When we initially started the player in the showCamera() method, we stored the content type returned by the Player object by invoking the getContentType() method on it. (Look at the last part of showCamera(), just before the catch block starts.) We stored the content type in the contentType variable at that point.
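The in-memory hand-off between recording and playback is worth seeing in isolation. The plain-Java sketch below (standard java.io only, no MMAPI) shows the same ByteArrayOutputStream-to-ByteArrayInputStream round trip that viewVideo() performs; note that toByteArray() returns a copy, so the playback stream is independent of the recording buffer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class BufferRoundTrip {
    public static void main(String[] args) {
        // "Recording": RecordControl would write captured bytes into this buffer.
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        byte[] recorded = {10, 20, 30, 40};
        output.write(recorded, 0, recorded.length);

        // "Playback": snapshot the buffer and wrap it in an input stream,
        // just as we do before calling Manager.createPlayer(bis, contentType).
        ByteArrayInputStream bis = new ByteArrayInputStream(output.toByteArray());

        byte[] replayed = new byte[recorded.length];
        int read = bis.read(replayed, 0, replayed.length);
        System.out.println(read);                              // prints 4
        System.out.println(Arrays.equals(recorded, replayed)); // prints true
    }
}
```

Because the snapshot is a copy, closing or reusing the output buffer after creating the input stream does not disturb playback.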
Now we pass the stored type as the second parameter of the createPlayer() method of the Manager class.

Next, we create the VideoControl (similar to our earlier example of recording video). Additionally, we create the VolumeControl object in this example, so that we will be able to control the volume of the recorded video while viewing it. In Listing 15, we used the setLevel() method on the VolumeControl object. The level can vary from 0 (muted) to 100 (the highest volume). Finally, we pass the videoControl to our CameraCanvas and start the Player.

Now we will enhance our commandAction() method to add the actions for showing the recorded video, controlling the volume, and so on, as shown in Listing 16.

Listing 16. Enhanced commandAction() to accommodate volume control and the viewing of recorded video

public void commandAction(Command c, Displayable d) {
    //Earlier code remains as is
    else if (c == CMD_SHOW_RECORDED) {
        currentState = CUSTOM_SHOW_RECORDED;
        aVideoCanvas.addCommand(CMD_MUTE);
        aVideoCanvas.addCommand(CMD_HIGHEST_VOLUME);
        aVideoCanvas.addCommand(CMD_MEDIUM_VOLUME);
        aVideoCanvas.removeCommand(CMD_SHOW_RECORDED);
        ( new VideoViewerThread() ).start();
    }
    else if (c == CMD_MUTE) {
        if ( volumeControl != null ) {
            if ( !volumeControl.isMuted() ) {
                volumeControl.setMute(true);
                aVideoCanvas.removeCommand(CMD_MUTE);
                aVideoCanvas.addCommand(CMD_UNMUTE);
            }
        } else {
            JMEUtility.showErrorAlert("ERROR", "NOT ABLE TO GET VOLUME CONTROL",
                4000, this, parentMidlet);
        }
    }
    else if (c == CMD_UNMUTE) {
        if ( volumeControl != null ) {
            if ( volumeControl.isMuted() ) {
                volumeControl.setMute(false);
                aVideoCanvas.removeCommand(CMD_UNMUTE);
                aVideoCanvas.addCommand(CMD_MUTE);
            }
        } else {
            JMEUtility.showErrorAlert("ERROR", "NOT ABLE TO GET VOLUME CONTROL",
                4000, this, parentMidlet);
        }
    }
    else if (c == CMD_HIGHEST_VOLUME) {
        if ( volumeControl != null ) {
            volumeControl.setLevel(100);
        } else {
            JMEUtility.showErrorAlert("ERROR", "NOT ABLE TO GET VOLUME CONTROL",
                4000, this, parentMidlet);
        }
    }
    else if (c == CMD_MEDIUM_VOLUME) {
        if ( volumeControl != null ) {
            volumeControl.setLevel(40);
        } else {
            JMEUtility.showErrorAlert("ERROR", "NOT ABLE TO GET VOLUME CONTROL",
                4000, this, parentMidlet);
        }
    }
}

In Listing 16, when the Show Recorded command (c == CMD_SHOW_RECORDED) is executed, we add a few more commands, such as Mute, Highest Volume, and Medium Volume, to the VideoCanvas. We also remove the Show Recorded command. The most important thing to notice here is that we start the VideoViewerThread: when the Show Recorded command is invoked, the viewVideo() method of VideoViewerThread runs in a separate thread, and we are able to see the recorded video.

When we execute the Mute command (c == CMD_MUTE), we determine whether or not the volume is already muted by calling the isMuted() method on VolumeControl. If it is not already muted, we mute the VolumeControl by passing true to the setMute() method; passing false takes us out of mute mode. The code snippets for the CMD_UNMUTE, CMD_HIGHEST_VOLUME, and CMD_MEDIUM_VOLUME commands should be self-explanatory, as we have already discussed the use of the isMuted(), setMute(), and setLevel() methods on the VolumeControl object.

Now it's time to test our application again. Because the recording of video is not supported in the Wireless Toolkit and the PlayerListener.RECORD_STOPPED event will thus not be delivered, we need another temporary trick to test the code in the emulator. We want to remove the Stop Record command and add the Show Recorded command when we stop recording. So, we temporarily add the two lines of code in Listing 17 inside the commandAction() method under c == CMD_STOP_RECORD. We will remove these two lines when we deploy the code to a real device.

Listing 17.
Temporary code to remove the Stop Record command and add the Show Recorded command when recording has stopped

else if (c == CMD_STOP_RECORD) {
    videoRecordThread.stopRecord();
    //Comment out the following two lines while deploying
    //in real device
    aVideoCanvas.removeCommand(CMD_STOP_RECORD);
    aVideoCanvas.addCommand(CMD_SHOW_RECORDED);
}

Now we can build the application and click the Wireless Toolkit's Run icon. When we stop recording, we will be able to see the extra Show Recorded option in the menu, illustrated in Figure 11. If we had selected Show Recorded on a real device, the recorded part of the video would be displayed; on the emulator, though, we've hard-coded this command to load dummyrecord.mpg, so you will only be able to see that video file.

Once you are in view mode, select the menu again. You will be able to see more command options related to the volume of the playback, as shown in Figure 12. You can test these options in the Wireless Toolkit.

Figure 12. Wireless Toolkit emulator displaying the volume control options

If you want to deploy the current version of the application to a real device, comment out or remove the two lines in commandAction() that we added just before testing the application, as shown in Listing 18.

Listing 18. Commenting out the temporary code before deployment onto a real mobile device

else if (c == CMD_STOP_RECORD) {
    videoRecordThread.stopRecord();
    //aVideoCanvas.removeCommand(CMD_STOP_RECORD);
    //aVideoCanvas.addCommand(CMD_SHOW_RECORDED);
}

You should also comment out the recordDummy() call and uncomment the recordVideo() call inside the run() method of the VideoRecordingThread class.

Conclusion, and a look ahead

In this article you have learned about the important system properties related to video processing using JME and the Mobile Media API. I've shown you how to record a video on a mobile device using MMAPI and how to play it back.
I've also shown you how to use the PlayerListener to listen for MMAPI events, and how to use VolumeControl objects to control the volume of the recorded video while viewing it.

In Part 2 I will continue to develop the mobile video application example from this article. You will learn how to upload the recorded video to a server application and how to fetch it from the server to play back on your mobile device.

Srijeeb Roy holds a bachelor's degree in computer science and engineering from Jadavpur University, Kolkata, India. He is currently working as a technical architect at Tata Consultancy Services Limited on a Java EE-based project. He has been working with Java SE and EE for more than eight years, and has more than nine years of experience in the IT industry. He has developed several in-house frameworks and reusable components in Java for his company and clients. He has also worked in several other areas, such as Forte, CORBA, and Java ME.