EsoErik

Wednesday, March 4, 2015

 

Rendering a QGraphicsScene Directly to a Numpy Array with PyQt5 for Streaming MPEG-4 Encoding with FFmpeg

Qt5's QGraphicsScene, QGraphicsItem, QGraphicsView, and associated classes - together called the "graphics view framework" - are quite powerful, and they are fully exposed by PyQt5.  Even rendering a QGraphicsScene directly into a numpy.ndarray works, without requiring a QGraphicsView to be associated with the scene!  However, figuring this out takes a bit of knowledge of Qt, C++, Python, Numpy, ctypes, and sip, which is asking a lot and probably explains why this trick isn't seen more often.

Transcript of an IPython terminal session demonstrating direct QGraphicsScene to Numpy array rendering:

In [1]: from PyQt5 import Qt

In [2]: %gui qt
Out[2]: <PyQt5.QtWidgets.QApplication at 0x10e94f9d8>

In [3]: gs = Qt.QGraphicsScene()

In [4]: gs.addRect(10, 10, 100, 200, Qt.QPen(Qt.QColor(Qt.Qt.red)), Qt.QBrush(Qt.QColor(Qt.Qt.blue)))
Out[4]: <PyQt5.QtWidgets.QGraphicsRectItem at 0x11e473e58>

In [5]: gs.addText('hello world')
Out[5]: <PyQt5.QtWidgets.QGraphicsTextItem at 0x11d81d9d8>

In [6]: _5.moveBy(50,50)

In [7]: import numpy; image = numpy.zeros((600,800,3), dtype=numpy.uint8)

In [8]: gs.setBackgroundBrush(Qt.QBrush(Qt.Qt.black))

In [9]: import sip

In [10]: import skimage.io as skio

In [11]: import matplotlib.pyplot as plt; plt.ion()

In [12]: qimage = Qt.QImage(sip.voidptr(image.ctypes.data), 800, 600, Qt.QImage.Format_RGB888)

In [13]: qpainter = Qt.QPainter()

In [14]: qpainter.begin(qimage)
Out[14]: True

In [15]: qpainter.setRenderHint(Qt.QPainter.Antialiasing)

In [16]: gs.render(qpainter)

In [17]: qpainter.end()
Out[17]: True

In [18]: skio.imshow(image)
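For reference, here is the same procedure collected into a small standalone helper.  This is a minimal sketch rather than code from the session above; the function name is mine, and it assumes a Qt.QApplication already exists (e.g. via %gui qt or an explicit Qt.QApplication instance):

import numpy
import sip
from PyQt5 import Qt

def render_scene_to_array(scene, width, height):
    # Render a QGraphicsScene into a freshly allocated height x width x 3
    # uint8 ndarray.  A Qt.QApplication must already exist, and width * 3
    # should be divisible by 4 (see the row-padding caveat in the comments
    # at the end of this post).
    image = numpy.zeros((height, width, 3), dtype=numpy.uint8)
    qimage = Qt.QImage(sip.voidptr(image.ctypes.data), width, height, Qt.QImage.Format_RGB888)
    qpainter = Qt.QPainter()
    qpainter.begin(qimage)
    qpainter.setRenderHint(Qt.QPainter.Antialiasing)
    scene.render(qpainter)
    qpainter.end()
    return image

With the scene from the session above, render_scene_to_array(gs, 800, 600) produces the same image that skio.imshow displayed.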

This has proven useful for procedural video composition in a research setting (e.g., development and testing of computer vision algorithms, visualization of deltas in time-lapse images, and various other tasks where I need to overlay text, vector graphics, etc. over a series of images).

Wrapping a QImage around an ndarray loaded by skimage.io or what-have-you is the same as wrapping a QImage around an ndarray you created manually:

[04:03 PM][ehvatum@pincuslab-2:~/zplrepo]> ipython
Python 3.4.2 (default, Oct 22 2014, 12:10:46) 
Type "copyright", "credits" or "license" for more information.

IPython 3.0.0-dev -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: from PyQt5 import Qt

In [2]: %gui qt
Out[2]: <PyQt5.QtWidgets.QApplication at 0x10dc809d8>

In [3]: gs = Qt.QGraphicsScene()

In [4]: import skimage.io as skio

In [5]: loaded_image = skio.imread('/Users/ehvatum/heic1015a.jpg')

In [6]: loaded_image.shape
Out[6]: (2006, 3924, 3)

In [8]: import sip

In [9]: gs.addPixmap(Qt.QPixmap(Qt.QImage(sip.voidptr(loaded_image.ctypes.data), 3924, 2006, Qt.QImage.Format_RGB888)))
Out[9]: <PyQt5.QtWidgets.QGraphicsPixmapItem at 0x114574b88>

Keep in mind that the QImage constructor's format argument must match the actual in-memory layout of the Numpy array you supply.  So it won't work to feed in any sort of complicated view of an ndarray: if the array you wish to wrap in a QImage is not contiguous and aligned, make a contiguous, aligned copy with numpy.ascontiguousarray, and retain that copy for as long as the QImage exists (QImage does not keep a reference to the supplied array, so you must hold one yourself if the array is to avoid being garbage collected).
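As a small illustration of that caveat (a sketch with made-up array names, not code from the post): make a contiguous copy of a strided view, hold a reference to it, and, if you want to avoid relying on the 4-byte row padding discussed in the comments below, pass the row stride to the QImage constructor explicitly:

import numpy
import sip
from PyQt5 import Qt

big = numpy.zeros((480, 640, 3), dtype=numpy.uint8)
view = big[::2, ::2, :]                    # a strided view: not C-contiguous
wrapped = numpy.ascontiguousarray(view)    # contiguous, aligned copy
qimage = Qt.QImage(sip.voidptr(wrapped.ctypes.data), wrapped.shape[1], wrapped.shape[0], wrapped.strides[0], Qt.QImage.Format_RGB888)
# Keep a reference to 'wrapped' for as long as 'qimage' exists; QImage does
# not hold one for you.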

For the next bit, streaming MPEG-4 encoding of the ndarrays into which the QGraphicsScene is rendered, I use the moviepy package:


from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter

...

scene_rect = gs.sceneRect()
desired_size = [int(scene_rect.width()), int(scene_rect.height())]
# An odd width or height causes problems for some codecs, so round up to even
if desired_size[0] % 2:
    desired_size[0] += 1
if desired_size[1] % 2:
    desired_size[1] += 1
buffer = numpy.empty((desired_size[1], desired_size[0], 3), dtype=numpy.uint8)
qbuffer = Qt.QImage(sip.voidptr(buffer.ctypes.data), desired_size[0], desired_size[1], Qt.QImage.Format_RGB888)
ffmpeg_writer = FFMPEG_VideoWriter(str(self._fpath), desired_size, fps=10, codec='mpeg4', preset='veryslow', bitrate='15000k')

qpainter = Qt.QPainter()
for i in range(frame_count):
    advance_scene_by_one_frame(gs)
    qpainter.begin(qbuffer)
    qpainter.setRenderHint(Qt.QPainter.Antialiasing)
    gs.render(qpainter)
    qpainter.end()
    ffmpeg_writer.write_frame(buffer)

ffmpeg_writer.close()


Comments:
It turns out that RGB888 QImage rows are padded to 4-byte boundaries. The code in my post only works properly if the buffer's row stride happens to be divisible by 4. Instead, the buffer should be constructed along the following lines:
row_stride = (desired_size[0] * 3 + 3) // 4 * 4  # bytes per row, padded up to a multiple of 4
self._buffer = numpy.empty(row_stride * desired_size[1], dtype=numpy.uint8)
bdr = self._buffer.reshape((desired_size[1], row_stride))
bdr = bdr[:, :desired_size[0]*3]
self._buffer_data_region = bdr.reshape((desired_size[1], desired_size[0], 3))
self._qbuffer = Qt.QImage(sip.voidptr(self._buffer.ctypes.data), desired_size[0], desired_size[1], Qt.QImage.Format_RGB888)

With self._buffer_data_region fed to ffmpeg_writer:
self.ffmpeg_writer.write_frame(self._buffer_data_region)
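Putting the fix together with the original loop, the whole thing looks roughly like this.  It is a sketch: gs, advance_scene_by_one_frame, frame_count, and output_path are placeholders standing in for your scene, animation step, frame count, and output filename:

import numpy
import sip
from PyQt5 import Qt
from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter

scene_rect = gs.sceneRect()
width, height = int(scene_rect.width()), int(scene_rect.height())
width += width % 2     # some codecs reject odd dimensions
height += height % 2
row_stride = (width * 3 + 3) // 4 * 4  # RGB888 rows are padded to 4-byte boundaries

buffer = numpy.empty((height, row_stride), dtype=numpy.uint8)
# View of the pixel data only, excluding the per-row padding bytes:
buffer_data_region = buffer[:, :width * 3].reshape((height, width, 3))
qbuffer = Qt.QImage(sip.voidptr(buffer.ctypes.data), width, height, Qt.QImage.Format_RGB888)

ffmpeg_writer = FFMPEG_VideoWriter(str(output_path), (width, height), fps=10, codec='mpeg4', preset='veryslow', bitrate='15000k')
qpainter = Qt.QPainter()
for i in range(frame_count):
    advance_scene_by_one_frame(gs)
    qpainter.begin(qbuffer)
    qpainter.setRenderHint(Qt.QPainter.Antialiasing)
    gs.render(qpainter)
    qpainter.end()
    ffmpeg_writer.write_frame(buffer_data_region)
ffmpeg_writer.close()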

 
