By user924

2018-03-02 09:26:49

What is the best way to record a video with augmented reality? (adding text, images, or a logo to frames from the iPhone/iPad camera)

Previously I was trying to figure out how to draw into a CIImage (How to draw text into CIImage?) and how to convert the CIImage back to a CMSampleBuffer (CIImage back to CMSampleBuffer)

I have almost everything working; my only remaining problem is recording video using the new CMSampleBuffer with AVAssetWriterInput

But this solution isn't good anyway: it eats a lot of CPU while converting the CIImage to a CVPixelBuffer (ciContext.render(ciImage!, to: aBuffer))
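
As an aside (a possible mitigation, not a full fix): a CIContext is expensive to create and is meant to be created once and reused, so if it is being rebuilt for every frame, storing a single, ideally Metal-backed context as a property could reduce the per-frame cost. A minimal sketch, assuming this lives on the capture class:

import CoreImage
import Metal

// Create the CIContext once (e.g. as a property of the capture class) and
// reuse it for every frame; a Metal-backed context keeps rendering on the GPU.
let sharedCIContext: CIContext = {
    if let device = MTLCreateSystemDefaultDevice() {
        return CIContext(mtlDevice: device)
    }
    return CIContext(options: nil) // fallback if Metal isn't available
}()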

So I want to stop here and find some other way to record a video with augmented reality (for example, dynamically drawing text into frames while encoding the video into an mp4 file)

Here is what I've tried and don't want to use anymore...

// convert the original CMSampleBuffer to a CIImage and
// combine multiple CIImages into one (adding the augmented reality -
// text or some additional images)
let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let ciimage: CIImage = CIImage(cvPixelBuffer: pixelBuffer)
var outputImage: CIImage?
let images: [CIImage] = [ciimage, ciimageSec!] // add all the CIImages you'd like to combine
for image in images {
    outputImage = outputImage == nil ? image : image.composited(over: outputImage!)
}

// allocate this pixel buffer once (class property) and reuse it
if pixelBufferNew == nil {
    CVPixelBufferCreate(kCFAllocatorSystemDefault,
                        CVPixelBufferGetWidth(pixelBuffer),
                        CVPixelBufferGetHeight(pixelBuffer),
                        kCVPixelFormatType_32BGRA,
                        nil,
                        &pixelBufferNew)
}

// render the combined CIImage into the CVPixelBuffer
let ciContext = CIContext(options: nil)
if let aBuffer = pixelBufferNew {
    ciContext.render(outputImage!, to: aBuffer) // >>> THIS EATS A LOT OF CPU <<<
}

// wrap the new CVPixelBuffer in a new CMSampleBuffer,
// copying the timing info from the original buffer
var sampleTime = CMSampleTimingInfo()
sampleTime.duration = CMSampleBufferGetDuration(sampleBuffer)
sampleTime.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
sampleTime.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
var videoInfo: CMVideoFormatDescription? = nil
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, &videoInfo)
var oBuf: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, true, nil, nil, videoInfo!, &sampleTime, &oBuf)

Then I try to append the new CMSampleBuffer to a file (.mp4) using AVAssetWriter & AVAssetWriterInput... (I ran into errors with it; appending the original buffer from func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) works fine)

Is there any better solution?


@user924 2018-03-06 14:10:02

Now I'll answer my own question.

The best approach would be to use an Objective-C++ class (.mm), where we can use OpenCV and easily/quickly convert from CMSampleBuffer to cv::Mat and back to CMSampleBuffer after processing.

We can easily call Objective-C++ functions from Swift; a rough sketch of what that could look like is below.
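
For illustration only, the Swift side could look roughly like this, assuming a hypothetical Objective-C++ wrapper class (here called OpenCVWrapper, implemented in a .mm file and exposed through the project's bridging header) that does the CMSampleBuffer → cv::Mat → CMSampleBuffer round trip internally:

import AVFoundation
import CoreMedia

// Hypothetical Objective-C++ wrapper (illustration only) declared in a header
// that is listed in the bridging header; the .mm implementation would lock the
// pixel buffer, wrap its base address in a cv::Mat, draw the overlays with
// OpenCV, and build a new CMSampleBuffer to return:
//
//   @interface OpenCVWrapper : NSObject
//   + (CMSampleBufferRef)process:(CMSampleBufferRef)sampleBuffer;
//   @end

// Swift side, inside captureOutput(_:didOutput:from:)
// (memory-management details of the returned CMSampleBufferRef omitted here):
let processed = OpenCVWrapper.process(sampleBuffer)
// ...then hand `processed` to the AVAssetWriterInput as before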
