Using the Adobe AIR 2 NativeProcess API to create a screen recorder

 

  With the release of AIR 2, Adobe gives developers one of their most requested features: the ability to launch and communicate with native processes. You can still use Adobe AIR to build beautiful user interfaces with the expressiveness and ease of the Flash Platform, but now you can also build a new breed of applications that mash up existing desktop applications to create something entirely new. This small API addition is an enabler for all kinds of applications. If you have ever been stuck on a project because the functionality you needed was not included in the native AIR API, there is no longer an excuse to wait, because you can essentially extend AIR yourself. To demonstrate this new capability, I have created a screen recording application that interacts with two well-known desktop applications: FFmpeg and VLC.
  Setting up the project
  Follow these steps to set up the project in Flash Builder:
  Download and unzip the sample files for this article.
  In Flash Builder, choose File > Import Flex Project (FXP).
  Navigate to and select the screenrecording.fxp file you just unzipped.
  Click Finish.
  To build the project, you'll also need to install the AIR 2 SDK.
  Download the AIR 2 SDK from Adobe Labs.
  Follow the instructions in the Release Notes for overlaying the Adobe AIR SDK on your existing Flex SDK.
  Next you'll need to update your compiler settings to use the new SDK.
  Right-click the project in Package Explorer and select Properties.
  Select Flex Compiler, select Use A Specific SDK, and then select the new SDK.
  Run Project > Clean.
  In the project you will find two applications (Screenrecording_VideoOnly.mxml and Screenrecording_VideoSound.mxml). The first application shows how to record just the screen. The second application builds upon the first and adds the ability to also record sound.
  Recording the screen with VLC
  You may know VLC as a media player that plays almost all available audio and video files, but it can do much more. It is a complete media framework that you can use to transcode all types of data and even stream media. One of its less well-known features is the ability to record the user's screen. One drawback of VLC's screen recording feature is that you cannot record the screen and audio at the same time. This is something you are going to fix in this article by using another new feature in AIR 2: the ability to record the microphone input.
  Preparing the app as a native application
  In order to use the new NativeProcess class in your application, you need to add one line to your application descriptor file; you'll find it in Screenrecording_VideoOnly-app.xml and Screenrecording_VideoSound-app.xml. It tells the compiler that you're not creating a normal AIR file but a native application, which will have extended privileges such as access to external applications.
  The screen recording code is in the class de.benz.exec.ScreenRecorder. To use this class, you pass it the command line application you want to use as a File object. In a real-world application you'll need to handle different file paths on different platforms. For example, on Mac OS X the default path for the VLC program is /Applications/VLC.app/Contents/MacOS/VLC and on Windows it is typically C:\Program Files\VideoLAN\VLC\vlc.exe. One solution would be to package the external applications together with your native installer and then use the constants of the File class to access the binaries independently of the user's platform. Another solution is to let the user select the path the first time your application starts. For the sake of simplicity, the paths are hard coded in this example; depending on your system you may need to adjust them.
  To create a native process in AIR you need to instantiate a NativeProcessStartupInfo object, which stores all the information needed to actually start the process. After instantiating the NativeProcessStartupInfo object, you pass it the File object that points to the VLC executable, as in the sketch below.
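  Here is a minimal sketch of that setup. The hard-coded Mac OS X path comes from the paragraph above, the variable names are my own, and it assumes the descriptor change mentioned earlier is the extendedDesktop profile declaration (a line along the lines of <supportedProfiles>extendedDesktop</supportedProfiles>).

  // Point a File object at the VLC binary (Mac OS X path; adjust for Windows as noted above).
  var vlcExecutable:File = new File("/Applications/VLC.app/Contents/MacOS/VLC");

  // The NativeProcessStartupInfo object stores everything needed to start the process.
  var startupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
  startupInfo.executable = vlcExecutable;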
  Setting the command line arguments
  Now you need to provide the command line arguments. The NativeProcessStartupInfo object has a property arguments, which is a Vector with all the necessary arguments. VLC can be completely controlled via the command line. However, most of the time you use VLC through its provided interface, so it can be a bit difficult to determine which arguments to use. After some research I came up with the following combination:

  var processArgs:Vector.<String> = new Vector.<String>();
  processArgs.push("-I");
  processArgs.push("rc");
  processArgs.push("--rc-fake-tty");
  processArgs.push("screen://");
  processArgs.push(":screen-fps=15");
  processArgs.push(":screen-caching=100");
  processArgs.push(":sout=#transcode{venc=x264{bframes=0,nocabac,ref=1,nf,level=13,crf=24,partitions=none},vcodec=h264,fps=15,vb=3000,width=1024,height=740,acodec=none}:duplicate{dst=std{mux=mp4,access=file,dst='" + targetFile.nativePath + "'}}");
  startupInfo.arguments = processArgs;
  The key arguments are:
  -I rc This tells VLC that it should not launch with the standard VLC interface but rather as an invisible process. The rc stands for remote control and allows you to interact with the running process.
  --rc-fake-tty This tells VLC that it will be controlled via standard input (STDIN) and standard output (STDOUT).
  screen:// This tells VLC that you want to use the computer's screen as the input signal.
  :screen-fps=15 This sets the frame rate at which VLC tries to grab the screen. This should be carefully chosen as it dramatically affects the performance. Screen recording in general is a rather processor intensive operation. Frame rates that are too high can make the system unresponsive. On my system, a frame rate of 15 works well and should be enough for most screen recording tasks.
  :sout This argument is used to set all the parameters for the encoding of the screen recording, including codec options, frame rates, bit rates, and the dimensions of the final video. You can set these options in a vast range of combinations. For this example it's only important to make sure it encodes to a video file that Flash Player supports. The provided example encodes the screen recording using H.264. Note that I set the audio option to none (acodec=none) because VLC does not support recording the screen and audio at the same time.
  :duplicate This sets the container format (in this case mp4) and the destination of the generated video. It also tells VLC to generate a physical file (access=file).
  Now that the NativeProcessStartupInfo object is complete, you can create the actual process by creating a new NativeProcess object and adding event listeners for the standard output, error, and exit events. The output and error handlers look like this:

  private function onOutputData(evt:ProgressEvent):void {
      var outputData:String = p.standardOutput.readUTFBytes(p.standardOutput.bytesAvailable);
      trace(outputData);
  }

  private function onErrorData(evt:ProgressEvent):void {
      var errorData:String = p.standardError.readUTFBytes(p.standardError.bytesAvailable);
      trace(errorData);
  }
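  The snippet that creates the process is not reproduced above, so here is a minimal sketch of that step; it assumes p is a NativeProcess member variable and onExit is a hypothetical handler for the exit event, neither of which is taken verbatim from ScreenRecorder.as.

  // Create the process, register the handlers shown above, and launch VLC.
  p = new NativeProcess();
  p.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onOutputData);
  p.addEventListener(ProgressEvent.STANDARD_ERROR_DATA, onErrorData);
  p.addEventListener(NativeProcessExitEvent.EXIT, onExit); // onExit: hypothetical exit handler
  p.start(startupInfo); // VLC launches and starts recording the screen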
  Communicating with the running process
  Now the process is running and VLC is recording the screen. To stop the recording, it is not enough to just kill the process, because that would result in a corrupted video file. Instead you have to communicate with the running process, in this case through the standard input (STDIN). In AIR 2 you can send data to a running process's input pipe by using the standardInput property of the NativeProcess class. The standardInput property is of type IDataOutput, so you can write to it as you would a ByteArray.
  In ScreenRecorder.as, stopRecording() is used to stop the recording process. It writes a stop command followed by a newline character; the newline has the same effect as pressing Return in your terminal window to actually issue the command. This command triggers VLC to properly close the file stream of the recorded video, after which it is safe to quit VLC. However, because writing the actual file takes a bit longer, the code postpones this task with a timeout so that VLC can first finish writing and closing the file. To quit the process, the kill() method issues a quit command in the same way. A sketch of both methods appears at the end of this section.
  That is it for the first part: you now have a tool that records the user's screen. So far the file contains only video data and no audio data. In the next part you will use the native microphone recording capabilities of AIR to also record the microphone sound.
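  Here is that sketch. The "stop" and "quit" commands and the use of a timeout come from the description above, but the method bodies are reconstructed rather than copied from ScreenRecorder.as, and the 2000 ms delay is an assumption.

  // Stop the recording via STDIN, then quit VLC after it has finished writing the file.
  public function stopRecording():void {
      p.standardInput.writeUTFBytes("stop\n"); // the newline issues the command, like pressing Return
      setTimeout(kill, 2000);                  // give VLC time to close the file (delay is an assumption)
  }

  private function kill():void {
      p.standardInput.writeUTFBytes("quit\n"); // ask VLC to quit cleanly instead of killing the process
  }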
  Recording audio with AIR 2
  In the previous section you learned how to record the video data without sound using VLC. Fortunately, AIR 2 introduced the ability to access the raw PCM sound data from a connected microphone. Prior to AIR 2, the only way to record sound from the line in was to send the recorded bytes to a Flash Media Server instance through a NetConnection, which was not optimal for most desktop applications. In AIR 2, developers have direct access to the microphone through the new SampleDataEvent.SAMPLE_DATA event dispatched by an instance of the Microphone class.
  First you'll want to give the user the opportunity to choose a connected device for recording. You can do this by adding a ComboBox to the user interface that holds a list of connected microphones. This requires just one line of MXML in the main application class: a ComboBox whose data provider is the static names property of the Microphone class wrapped in an ArrayCollection.
  The actual recording logic is in the class de.benz.screenrecording.AudioRecorder. The startRecording method in AudioRecorder.as takes two arguments: the selected index of the input device and a reference to the file in which the recorded sound data is to be stored. To start the recording you must first configure the microphone. You get an instance of the microphone by calling the static function getMicrophone() of the Microphone class. To ensure that the microphone is constantly recorded even when there is silence, set the silence level to zero. The line microphone.rate = 44; sets the sound rate, which determines how many samples are taken from the input signal per second (in this case 44100). As a last step, you add an event listener for the SampleDataEvent.SAMPLE_DATA event that calls the onMicData function.
  The next step is to create the stream that is used to write the sound bytes. The file stream is opened in APPEND mode so that the new bytes are appended to the end of the file each time it is written to while recording. For this same reason, you also need to delete the target file prior to opening the stream; otherwise the bytes would be appended to the previous recording session. The actual writing of the sound bytes takes place in the onMicData method, which is called repeatedly while recording by the SampleDataEvent.SAMPLE_DATA event. The event has a data property, which is a ByteArray containing the recorded sound samples. Each sound sample is a 32-bit floating point number containing normalized values between -1 and 1, which can be read from the byte array using the readFloat() method. To work with these bytes later, you need to convert the values to 16-bit signed integers, which can hold values between -32768 and +32767. To perform the conversion, simply multiply the float values by 32767. The sample is then written to the file stream using writeShort(). To stop the recording, the SampleDataEvent.SAMPLE_DATA event listener is removed and the file stream closed. A sketch of this recording logic appears at the end of this section.
  If you run the application it will produce two files: the soundless H.264 video file and a file containing the raw bytes of the sound. In the next and final section you will use FFmpeg to combine these two files and produce one file that contains both the video and the audio track.
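  The exact code of AudioRecorder.as is not reproduced in this repost, so the following is a reconstructed sketch of the recording logic described above; the member and parameter names are my own assumptions, not necessarily those of the original class.

  // Sketch of the audio recording logic (reconstructed from the description above).
  private var microphone:Microphone;
  private var fileStream:FileStream;

  public function startRecording(deviceIndex:int, targetFile:File):void {
      // Configure the selected input device.
      microphone = Microphone.getMicrophone(deviceIndex);
      microphone.setSilenceLevel(0); // keep capturing samples even during silence
      microphone.rate = 44;          // 44 selects a sample rate of 44100 Hz

      // Delete any previous recording, then open the stream in APPEND mode.
      if (targetFile.exists) targetFile.deleteFile();
      fileStream = new FileStream();
      fileStream.open(targetFile, FileMode.APPEND);

      microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, onMicData);
  }

  private function onMicData(evt:SampleDataEvent):void {
      // Each sample is a 32-bit float between -1 and 1; convert it to a 16-bit signed integer.
      while (evt.data.bytesAvailable) {
          var sample:Number = evt.data.readFloat();
          fileStream.writeShort(int(sample * 32767));
      }
  }

  public function stopRecording():void {
      microphone.removeEventListener(SampleDataEvent.SAMPLE_DATA, onMicData);
      fileStream.close();
  }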
  Merging audio and video with FFmpeg
  FFmpeg is a cross-platform application for manipulating a wide variety of media files. It's completely controllable via the command line, and it understands most of today's popular video and audio codecs and file formats. Its command line access and cross-platform nature make it a perfect match for use in an AIR application. In this section you'll see how to use FFmpeg as an external process to merge the video track and the raw sound bytes. The general setup of the VideoSoundMerger class is quite similar to the ScreenRecorder class, since it also deals with running a native process. The process arguments that are passed to the NativeProcessStartupInfo object are different. Also, there is no need to interact with the running process, as it simply processes the command and closes automatically when it is done. Here are the command line arguments used to merge a sound and video track with FFmpeg:
  -isync This tells FFmpeg to sync the video with the audio track.
  -f s16be This tells FFmpeg that the samples of the sound track are encoded as 16-bit integers in big endian.
  -i  This is used to pass the path to the sound file.
  -i  This is used to pass the path of the video file.
  -acodec libfaac This tells FFmpeg to encode the audio track with an Advanced Audio Coding (AAC) encoder.
  -ab 128 This tells FFmpeg to encode the sound with a bit rate of 128.
  -vcodec copy This tells FFmpeg to use the codec of the input video. This is useful because it keeps the video from being encoded again, which saves time and results in a better quality video.
  In ActionScript these arguments look like this:

  var processArgs:Vector.<String> = new Vector.<String>();
  processArgs.push("-isync");
  processArgs.push("-f");
  processArgs.push("s16be");
  processArgs.push("-i");
  processArgs.push(soundFile.nativePath);
  processArgs.push("-i");
  processArgs.push(videoFile.nativePath);
  processArgs.push("-acodec");
  processArgs.push("libfaac");
  processArgs.push("-ab");
  processArgs.push("128");
  processArgs.push("-vcodec");
  processArgs.push("copy");
  processArgs.push(targetFile.nativePath);
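  Launching FFmpeg then mirrors the VLC example. The sketch below assumes an ffmpegExecutable File variable pointing at the FFmpeg binary (the article notes that the paths are hard coded in the sample, like the VLC path) and a hypothetical onExit handler; neither name is taken from VideoSoundMerger.as.

  // Launch FFmpeg; it exits on its own once the merge is complete, so only the exit event matters.
  var startupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
  startupInfo.executable = ffmpegExecutable; // File pointing at the FFmpeg binary
  startupInfo.arguments = processArgs;

  var p:NativeProcess = new NativeProcess();
  p.addEventListener(NativeProcessExitEvent.EXIT, onExit); // onExit: hypothetical completion handler
  p.start(startupInfo);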
  After FFmpeg is launched with a call to p.start(startupInfo), it will generate the final video file of the screen recording.
  Where to go from here
  This article covered the basics of the new NativeProcess API and the microphone capabilities in AIR 2. The NativeProcess API opens many opportunities to extend your AIR applications, and you've likely already thought of some exciting uses for it.
  http://www.adobe.com/devnet/air/flex/articles/air_screenrecording.html