Thomas Mountainborn

Unity and OpenCV – Part three: Passing detection data to Unity

We’ll now integrate OpenCV face detection into Unity. In this part, the camera stream and pixel processing will be done within OpenCV, and we will only send the location and size of the detected faces to Unity. This approach is used for applications which don’t need to overlay any visuals onto the camera stream, but only require the OpenCV data as a form of input.

Let’s start on the C++ side. First, add these files to the dependencies (as shown in the previous part):


Here’s the full Source.cpp which is used to track faces and send their location to Unity. I will not cover the actual OpenCV code – it’s mostly just sample code, and the scope of this tutorial is purely to introduce you to a way of making OpenCV and Unity communicate in an optimized way.

We obviously start with a couple of imports and namespace using statements. Then, we declare a struct: it will be used to pass data directly from the unmanaged C++ code to the managed Unity scripts. This will be covered in more detail once we get to the Unity side of things. The structure is made to suit the application’s needs – you are free to change it as required.

Next up, we have all the methods which can be called from within Unity. Because we are using C++, we need to explicitly tell the compiler how to expose these methods. Normally, the C++ compiler will mangle the method names when packaging them into a .dll. Therefore, we instruct it to use the classic “C” style of signatures, which leaves the method names just as you wrote them. You will always have to use this syntax when exposing C++ methods to a managed application.
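The pattern looks roughly like this – the macro name and the sample function are hypothetical, and `__declspec(dllexport)` is the Windows-specific part:

```cpp
#if defined(_WIN32)
    // extern "C" suppresses C++ name mangling; __declspec(dllexport) adds
    // the symbol to the DLL's export table.
    #define EXPORT_API extern "C" __declspec(dllexport)
#else
    // Other compilers have no __declspec, but extern "C" alone still keeps
    // the symbol name as written.
    #define EXPORT_API extern "C"
#endif

// A hypothetical exported function, just to show the shape. Thanks to
// extern "C", managed code can later find it under its plain name
// "GetVersion" rather than a mangled form like "?GetVersion@@YAHXZ".
EXPORT_API int GetVersion()
{
    return 1;
}
```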

Important to note here are the parameters Circle* outFaces and int& outDetectedFacesCount. The first is a pointer to a Circle struct, indicating that we are passing an array of Circles to Detect(). The latter indicates that outDetectedFacesCount is passed by reference.
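A hypothetical stand-in for Detect() makes the calling convention concrete: the caller owns the buffer, and the function reports how much of it was filled. The hard-coded “detections” below are placeholders; the real plugin gets them from OpenCV:

```cpp
#include <algorithm>
#include <vector>

struct Circle { int X, Y, Radius; };

// outFaces points at a caller-owned array with room for maxOutFacesCount
// entries; outDetectedFacesCount is written by reference so the caller knows
// how many entries are valid. Results beyond the buffer size are dropped.
extern "C" void Detect(Circle* outFaces, int maxOutFacesCount,
                       int& outDetectedFacesCount)
{
    // Placeholder detections standing in for OpenCV's detectMultiScale output.
    const std::vector<Circle> detected = { {120, 80, 30}, {300, 200, 45} };

    outDetectedFacesCount =
        std::min(static_cast<int>(detected.size()), maxOutFacesCount);
    std::copy_n(detected.begin(), outDetectedFacesCount, outFaces);
}
```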

Compile the project as x64 Release, and copy the resulting .dll to the Assets/Plugins folder in your Unity project. You will also need to copy the OpenCV .dlls to that same folder, as our own .dll depends on them. The OpenCV .dlls were compiled in the first part, and can be found in \OpenCV 3.1\bin\Release.
It can be a bit tricky to know exactly which .dlls you need – copying just the ones matching the #include statements isn’t enough, as these in turn depend on other .dlls. You can run Dependency Walker on our .dll to figure out exactly which .dlls are required, or if you’re feeling a bit lazy, you can simply copy all of the OpenCV .dlls. If Unity tells you our .dll can’t be loaded even though it’s in the Plugins folder, it’s because dependencies are missing.

A final thing you will need to copy is the cascade classifier .xml. In this sample, I’m using the LBP frontal face cascade – LBP cascades are significantly faster than Haar cascades, though slightly less accurate. You will need to copy it from your OpenCV directory into the working directory of your Unity application – when you’re in the editor, this is the root project directory.

With all the files in place, we can get to the Unity scripts. Create a new script called OpenCVFaceDetection, and copy this underneath the generated class:

The static OpenCVInterop class exposes to C# all the C++ methods we just marked as dllexport. Note that the method signatures have to match. The DllImport attribute takes the file name of your dll.

Underneath that, add this structure declaration. It needs to have the same exact fields as the one declared in C++, in the same order, and it must be marked to have a sequential layout. This way we’ll be able to read the struct data coming from the unmanaged environment.

This is the class itself:

The important bit happens in Update(): in an unsafe block, we call OpenCVInterop.Detect(), and pass a fixed pointer to an array of CvCircle. This means that the C++ OpenCV code will write the detected faces directly into this struct array we defined in C#, without the need for performance-heavy copies from unmanaged space into managed space. This is a good trick to know for any C++ interop you may have to do in the future.

Because we don’t know how many faces will be detected, we create the array at a predefined size, and ask our C++ code to tell us how many faces were actually detected using a by ref integer. We also pass the array size to C++ to prevent buffer overflows.

In case you are not familiar with the two keywords above: unsafe simply allows you to use pointers in C#, and fixed tells the compiler that the given variable has to stay at its assigned position in memory, and is not allowed to be moved around by the garbage collector – otherwise the C++ code could inadvertently write to a different bit of memory entirely, corrupting the application.

This same procedure can be used to pass an array of pixels between OpenCV and Unity without having to copy it, allowing you to display video footage from OpenCV within Unity, or to pass a WebcamTexture stream to OpenCV for processing. That is beyond the scope of this part, however.

[2020 edit] To use unsafe in recent Unity versions (2018 and later), you can simply tick “Allow unsafe code” in the Player settings under “Configuration”, and ignore the info below.

[Original post] In order to be able to use unsafe, we need to add a file called “mcs.rsp” to the root asset folder, and add the line “-unsafe” to it. (in versions before 5.5 you may need to use either smcs.rsp for .NET 2.0 subset, or gmcs.rsp for the full .NET 2.0). This file is an instruction to the compiler to allow unsafe code.
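For clarity, the entire mcs.rsp file consists of just that single compiler flag on one line:

```
-unsafe
```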

While this will let Unity compile your scripts, Visual Studio will still complain when you try to debug with an unsafe block – normally you would add a flag in the project properties, but Visual Studio Tools for Unity blocks access to those. To be able to debug, you will have to edit the .csproj (root project folder) manually, and set the two <AllowUnsafeBlocks>false</AllowUnsafeBlocks> lines to true. You will have to do this after every script change, since the .csproj is recreated by Unity on every compile, so it’s useful to comment out the unsafe lines when you’re working on something else in the project.
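The edit itself is a one-word change to each of those two lines in the .csproj:

```xml
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
```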

In this sample, I’m storing the face viewport positions in a list, to be consumed by other scripts such as this one, which simply moves an object to the detected position:

That wraps up this part – hopefully I’ve taught you enough to set you on the path. Good luck!

[2020 Edit] Community member Iseta has set up a full Github repo with everything from this tutorial, which should help you get set up even faster!


Firstly, thanks for this detailed article.
I got an error in the PositionAtFaceScreenSpace script at the line
“if (OpenCVFaceDetection.NormalizedFacePositions.Count == 0)”
Error msg : Object reference not set to an instance of an object

I have placed both the scripts “PositionAtFaceScreenSpace ” and “OpenCVFaceDetection” as components of same gameobject. Any suggestions as to what is causing this error or how to solve it would be great.

Thanks again

Avinash Singh

Could you update this a bit to show how exactly I would be able to perform a function of opencv in Unity. Like, maybe add an empty gameobject and then add the scripts to the gameobject?


I have a video stream from an ESP32 CAM with the output to a URL. Can you provide an example of how I get the URL to a gameobject and use the data in Unity?

Sergio Pulido

Hi Thomas,

Awesome tutorial, thank you so much!

Have you ever tried using Dlib in Unity? I’m struggling using it; I’ve been able to make it work directly in C++, but as soon as I export the DLL, Unity gets stuck on start. I was wondering if it is necessary to generate a DLL for Dlib to make it work in Unity, as when I use it in C++ I use a .lib instead of a .dll.


Thank you. Your demonstration helped me a lot. I had a question though.
In the cpp file for the dll you use “imshow(_windowName, frame);” at line 81. I don’t see any call to waitKey() after that. Usually, in OpenCV we call the waitKey() function to display the image. Why here we don’t need it anymore? Is there an exception if it is being used in a DLL? Thank you again for this post.


Thanks for the tutorial! Everything works perfectly in the unity editor, but when I build the project, it crashes with a runtime error when loading the scene including the .dll import. Do you have any idea how to troubleshoot this?



I have the following error :

[OpenCVFaceDetection] Failed to open camera stream.
UnityEngine.Debug:LogWarningFormat(String, Object[])
OpenCVFaceDetection:Start() (at Assets/Scripts/OpenCVFaceDetection.cs:56)

I have got no camera on my system currently. I believe that is the reason for this error. However, from the Unity forum I got to know a bit about how this is to be run with a webcam. Can you tell me if my understanding is right? And if so, I want to send the frames from the camera present in Unity (in future HoloLens). Have you got any inputs for me regarding how I could do this?

Thanks for the tutorials!


Hi ! this solution can be used in a WebGL build ?? or just local

Kaveh Malek

Hi Thomas. Thank you for the tutorial. I’ve taken the steps in Part one (Install) and Part two (Project setup). When I deployed the code in Part three (Passing detection data to Unity) in Visual Studio (C++), the following message popped up on my screen: “c:/project2/x64/debug/project2.dll is not a valid Win32 application” (project2 is the name of the code) and the code stopped working.

Could anyone help me deal with this error?