-
public abstract class MediaEncoder

Base class for single-track encoders, coordinated by a MediaEncoderEngine. For the lifecycle of this class, read the comments in the engine class.

This class manages a background thread, which we call the EncoderThread, and streamlines events onto it:

1. When prepare is called, we call onPrepare on the encoder thread.
2. When start is called, we call onStart on the encoder thread.
3. When notify is called, we call onEvent on the encoder thread.
4. After starting, encoders are free to acquire an input buffer with tryAcquireInputBuffer or acquireInputBuffer.
5. After getting the input buffer, they are free to fill it with data.
6. After filling it with data, they are required to call encodeInputBuffer for encoding to take place.
7. After this happens, or at regular intervals, or whenever they want, encoders can call drainOutput with a false parameter to fetch the encoded data and pass it to the engine (so it can be written to the muxer).
8. When stop is called - either by the engine user, or as a consequence of having called requestStop - we call onStop on the encoder thread.
9. The onStop implementation should, as fast as possible, stop reading, signal the end of the input stream (there are two ways to do so), and finally call drainOutput for the last time, with a true parameter.
10. Once everything is drained, we call onStopped, on an unspecified thread. There, subclasses can perform extra cleanup of their own resources.

For VIDEO encoders, things are much easier because we skip the whole input part. See the description in VideoMediaEncoder.

MAX LENGTH CONSTRAINT

The max length constraint is checked automatically during drainOutput, OR subclasses can provide a hint to this encoder using notifyMaxLengthReached. In this second case, we can request a stop at reading time, so we avoid useless readings in certain setups (where drain is called a lot after reading).
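The threading contract above can be sketched in plain Java. This is a simplified stand-in, not the real MediaEncoderEngine API: a single-thread executor models the EncoderThread, and the class, method, and log names below are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the encoder base class described above.
abstract class SketchEncoder {
    // Models the dedicated "EncoderThread": every event runs here, in order.
    private final ExecutorService encoderThread = Executors.newSingleThreadExecutor();
    final List<String> log = new ArrayList<>();

    final void prepare() { encoderThread.execute(this::onPrepare); }             // step 1
    final void start()   { encoderThread.execute(this::onStart); }               // step 2
    final void notify(String event) { encoderThread.execute(() -> onEvent(event)); } // step 3

    final void stop() {                                                          // step 8
        encoderThread.execute(() -> {
            onStop();          // step 9: stop reading, signal end of input stream...
            drainOutput(true); // ...then drain for the last time
            onStopped();       // step 10: extra cleanup (here, on the same thread)
        });
        encoderThread.shutdown();
    }

    // Steps 4-7 happen inside subclasses; here we only record the drain calls.
    final void drainOutput(boolean endOfStream) {
        log.add("drain(" + endOfStream + ")");
    }

    abstract void onPrepare();
    abstract void onStart();
    abstract void onEvent(String event);
    abstract void onStop();
    void onStopped() { log.add("stopped"); }

    boolean await() {
        try {
            return encoderThread.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        }
    }
}

class DemoEncoder extends SketchEncoder {
    @Override void onPrepare() { log.add("prepare"); }
    @Override void onStart()   { log.add("start"); }
    @Override void onEvent(String e) { log.add("event:" + e); drainOutput(false); } // step 7
    @Override void onStop()    { log.add("stop"); }
}
```

Because every callback is posted to the same single-thread executor, the subclass sees the exact ordering the list above promises, with no extra locking.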
TIMING

Subclasses can use timestamps (in microseconds) in any reference system they prefer. For instance, it might be the nanoTime reference, or some reference provided by SurfaceTextures. However, they are required to call notifyFirstFrameMillis and pass the milliseconds of the first frame in the currentTimeMillis reference, so that we have something to coordinate on.
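The coordination this enables can be shown with a small, self-contained sketch. The helper class below is hypothetical (only the contract - arbitrary microsecond timestamps plus one currentTimeMillis anchor for the first frame - comes from the docs above):

```java
// Hypothetical helper: frame timestamps arrive in some arbitrary microsecond
// reference (e.g. System.nanoTime() / 1000), and the first frame's wall-clock
// millis are recorded in the System.currentTimeMillis() reference, which is
// what notifyFirstFrameMillis would receive.
class TimestampSketch {
    long firstFrameUs = -1;  // arbitrary reference, microseconds
    long firstFrameMillis;   // currentTimeMillis reference

    void onFrame(long frameUs) {
        if (firstFrameUs < 0) {
            firstFrameUs = frameUs;
            firstFrameMillis = System.currentTimeMillis();
        }
    }

    // Converts any later frame timestamp into the shared wall-clock reference:
    // the anchor plus the elapsed time since the first frame.
    long toWallClockMillis(long frameUs) {
        return firstFrameMillis + (frameUs - firstFrameUs) / 1000;
    }
}
```

The point is that the encoder never needs to know which clock produced frameUs; only deltas from the first frame matter once the anchor is set.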
-
-
Method Summary
final void prepare(MediaEncoderEngine.Controller controller, long maxLengthUs)
    This encoder was attached to the engine.
final void start()
    Start recording.
final void notify(String event, Object data)
    The caller notifying of a certain event occurring. Should analyze the string and see if the event is important.
final void stop()
    Stop recording.
-
Method Detail
-
prepare
final void prepare(MediaEncoderEngine.Controller controller, long maxLengthUs)
This encoder was attached to the engine.
-
start
final void start()
Start recording. This might be a lightweight operation in case the encoder needs to wait for a certain event, like a "frame available". The STATE_STARTED state will be set when draining for the first time (not when onStart ends). NOTE: it's important to call post instead of run()!
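A minimal sketch of this deferred state transition, under the assumption (from the note above) that onStart is posted to the encoder thread and the state only flips on the first drain. All names here are illustrative, not the real API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: start() posts onStart ("post instead of run"), but the
// state only becomes STARTED on the first drainOutput, not when onStart ends.
class StartSketch {
    static final int STATE_PREPARED = 0;
    static final int STATE_STARTED = 1;

    private final ExecutorService encoderThread = Executors.newSingleThreadExecutor();
    volatile int state = STATE_PREPARED;

    void start() {
        // Post, don't run inline: all work stays on the encoder thread.
        encoderThread.execute(this::onStart);
    }

    private void onStart() {
        // Lightweight: e.g. just wait for a "frame available" event.
        // Note: the state is deliberately NOT changed here.
    }

    void drainOutput(boolean endOfStream) {
        if (state == STATE_PREPARED) state = STATE_STARTED; // first drain
        // ...fetch encoded data and pass it to the engine...
    }

    boolean shutdown() {
        encoderThread.shutdown();
        try {
            return encoderThread.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        }
    }
}
```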
-
notify
final void notify(String event, Object data)
The caller notifying of a certain event occurring. Should analyze the string and see if the event is important. NOTE: it's important to call post instead of run()!
- Parameters:
event - what happened
data - object
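The "analyze the string" part can look like the sketch below. The event name "frame" and the handler class are made up for illustration; only the (String event, Object data) shape comes from the signature above:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical onEvent implementation running on the encoder thread:
// recognized event strings trigger work, unknown ones are ignored.
class EventSketch {
    final AtomicBoolean frameSeen = new AtomicBoolean(false);

    void onEvent(String event, Object data) {
        switch (event) {
            case "frame":             // hypothetical event name
                frameSeen.set(true);  // e.g. encode the frame carried in 'data'
                break;
            default:
                break;                // not important: ignore
        }
    }
}
```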
-