Android Launch/Package Fails (How do I cross-compile OpenCV with Unreal?)

[EDIT] Learned a few things while this post was being approved.

  1. It’s failing because I have to cross-compile twice: x64 for the Unreal editor on Windows, and ARM (via the Android NDK toolchain) for the device
  2. Android links against .so (shared) and .a (static) libraries to see the OpenCV assets, not the Windows .dll/.lib pair

This refines my question a bit:
[UPDATED QUESTION] How do I import the appropriate OpenCV assets from the 3.2 OpenCV Android install? I know I can import .so files using

            PublicAdditionalLibraries.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Android/"));

But how do I tell which OpenCV .so files to add, compared to the DLLs/libs I use on Windows? They don’t map 1:1.
How do I import .a files?
Can I just import all of OpenCV for my usecase or is there a recommended approach here?
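For context, what I imagine is needed is a per-platform branch in the Build.cs constructor, something like this sketch (the Android library name markersdetector is a placeholder — I don’t yet know which OpenCV .so files belong here; ThirdPartyPath is the variable from my Build.cs below):

```csharp
// Sketch: branch per target platform inside the module constructor.
// The Android library name below is illustrative, not confirmed.
if (Target.Platform == UnrealTargetPlatform.Win64)
{
    PublicAdditionalLibraries.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Win64/MarkersDetector.lib"));
}
else if (Target.Platform == UnrealTargetPlatform.Android)
{
    PublicLibraryPaths.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Android"));
    PublicAdditionalLibraries.Add("markersdetector"); // would resolve libmarkersdetector.so
}
PublicIncludePaths.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Includes"));
```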


I’m trying to integrate OpenCV for Android. I’ve followed a few tutorials now, but I still run into the same blocker every time. The issue appears to be in the AutomationTool during the Android build: it can never find the includes and libs it needs to build successfully, and it doesn’t provide much information about why not.

I get the VS build working correctly, but then I launch to Android and this happens:

LogPlayLevel: UnrealBuildTool: D:/Unreal Projects/Russian/Source/Russian/WebcamReader.cpp:36: error: undefined reference to 'MarkersDetector::MarkersDetector(std::map<int, std::array<float, 3u>, std::less<int>, std::allocator<std::pair<int const, std::array<float, 3u> > > >*, std::array<double, 9u>, std::array<double, 8u>, int)'
LogPlayLevel: UnrealBuildTool: D:/Unreal Projects/Russian/Source/Russian/WebcamReader.cpp:44: error: undefined reference to 'MarkersDetector::captureCamera(int, int, int)'
LogPlayLevel: UnrealBuildTool: D:/Unreal Projects/Russian/Source/Russian/WebcamReader.cpp:160: error: undefined reference to 'MarkersDetector::update(std::vector<unsigned char, std::allocator<unsigned char> >&, std::array<float, 3u>&, std::array<float, 3u>&, int&)'
LogPlayLevel: UnrealBuildTool: D:/Unreal Projects/Russian/Source/Russian/WebcamReader.cpp:184: error: undefined reference to 'MarkersDetector::releaseCamera()'

This is what my Build.cs looks like. I’ve tried a few things; right now I have both absolute and relative paths in there…

public Russian(TargetInfo Target)
{
    string ModulePath = Path.GetDirectoryName(RulesCompiler.GetModuleFilename(this.GetType().Name));
    string ThirdPartyPath = Path.GetFullPath(Path.Combine(ModulePath, "../../ThirdParty/"));

    // I've tried both absolute and relative variants of the same paths:
    PublicAdditionalLibraries.Add("D:/Unreal Projects/Russian/ThirdParty/MarkersDetector/Win64/MarkersDetector.lib");
    PublicAdditionalLibraries.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Win64/MarkersDetector.lib"));

    PublicIncludePaths.Add("D:/Unreal Projects/Russian/ThirdParty/MarkersDetector/Includes");
    PublicIncludePaths.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Includes"));

    PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "RHI", "RenderCore", "ShaderCore" });

    PrivateDependencyModuleNames.AddRange(new string[] { });

    // Uncomment if you are using Slate UI
    // PrivateDependencyModuleNames.AddRange(new string[] { "Slate", "SlateCore" });

    // Uncomment if you are using online features
    // PrivateDependencyModuleNames.Add("OnlineSubsystem");

    // To include OnlineSubsystemSteam, add it to the plugins section in your uproject file with the Enabled attribute set to true
}

In WebcamReader.cpp I have `#pragma comment(lib, "MarkersDetector.lib")` to attempt to access the lib.

What am I doing wrong here?

I had the exact same problem following this tutorial: …nreal_Engine_4
In that case everything was fine until I started trying to use OpenCV code; then the Android compiler would fail to even find the headers.

Right now I’m trying a Google-translated version of this Russian tutorial (it translates pretty well): …al-engine.html
Translation here:

getId() - blog about programming

August 28, 2015
How to connect MarkersDetector.dll to the Unreal Engine 4 project
This post describes how to connect my MarkersDetector.dll library, which uses several OpenCV calls. I had to build my own DLL instead of using OpenCV directly because of a quirk of the garbage collector in UE4: it crashed when calling some necessary methods, for example cv::findContours(). MarkersDetector.dll packs the necessary minimum to get the real-time coordinates and rotation vector of the webcam relative to static markers.

It is assumed that you have a working setup of UE 4.8.3 (or 4.9) and VS 2013 on Win64, and that you have downloaded the files MarkersDetector.h, MarkersDetectorDll.dll, MarkersDetectorDll.lib, WebcamReader.h, WebcamReader.cpp and the OpenCV header files: link to the archive.

Update: A ready-made project under UE 4.9 can be downloaded here.
The MarkersDetector source, ready to build for Android, can be viewed on GitHub.

Step 1. Create a project in UE4
This step is optional; you can also integrate MarkersDetector into a previously created project.
For a new project, run UE4 and select the menu item to create a new C++ project. Any template will do. After creation, open VS and try to build and run the new project from there. If everything is good, move on.

Step 2. Add dependencies to the project
In the root directory of the project, create a new folder called ThirdParty; inside it, create a MarkersDetector folder with two subfolders, Includes and Win64. Into Includes put my MarkersDetector.h and the opencv2 folder; into Win64, MarkersDetectorDll.lib:

    ThirdParty/MarkersDetector/Includes/MarkersDetector.h
    ThirdParty/MarkersDetector/Includes/opencv2/*
    ThirdParty/MarkersDetector/Win64/MarkersDetectorDll.lib

Now you need to connect these files to the project. Find the file Source/%project_name%/%project_name%.Build.cs and add the following.
At the beginning:

    using System.IO;

In the constructor:

    string ModulePath = Path.GetDirectoryName(RulesCompiler.GetModuleFilename(this.GetType().Name));
    string ThirdPartyPath = Path.GetFullPath(Path.Combine(ModulePath, "../../ThirdParty/"));

    PublicAdditionalLibraries.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Win64/MarkersDetectorDll.lib"));
    PublicIncludePaths.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Includes"));

In PublicDependencyModuleNames:

    PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "RHI", "RenderCore", "ShaderCore" });

Now you need to add MarkersDetectorDll.dll to the folder where the compiled executable is located: x64/Release/. If the build succeeds but the editor complains at startup, it means the DLL is not there.
It is worth trying to build the project now. If it succeeds, you can go further.
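Put together, the constructor from the snippets above might look like this (a sketch for the UE 4.8/4.9-era Build.cs API the article targets; "MyProject" is a placeholder for your project name):

```csharp
// Sketch of the full %project_name%.Build.cs after Step 2.
// "MyProject" is a placeholder for your actual project name.
using System.IO;
using UnrealBuildTool;

public class MyProject : ModuleRules
{
    public MyProject(TargetInfo Target)
    {
        PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "RHI", "RenderCore", "ShaderCore" });

        // Locate the ThirdParty/ folder relative to this .Build.cs file.
        string ModulePath = Path.GetDirectoryName(RulesCompiler.GetModuleFilename(this.GetType().Name));
        string ThirdPartyPath = Path.GetFullPath(Path.Combine(ModulePath, "../../ThirdParty/"));

        // Note: no leading slash in the second argument -- Path.Combine
        // discards the first argument when the second one is rooted.
        PublicAdditionalLibraries.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Win64/MarkersDetectorDll.lib"));
        PublicIncludePaths.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Includes"));
    }
}
```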

Step 3. Add the WebcamReader class from which the Blueprint will be created
With the editor closed, copy the WebcamReader.h and WebcamReader.cpp files into the project folder, then add them to the project in VS, for example by dragging them from the project folder into the Solution Explorer.
WebcamReader.h includes MarkersDetector.h, so all the dependencies will now be used. In WebcamReader.cpp, change the first include to match the name of your project.
Now build the project and run it.


Step 4. Add a Blueprint based on WebcamReader.
In the running UE4 editor of your project, WebcamReader should appear in the C++ Classes list. Right-click it to bring up the context menu; there should be an item to create a Blueprint based on this class - select it. You can name the file BP_WebcamReader; the name does not matter.
Now you need to configure this BP: add a mesh, a material for the mesh, and a texture parameter in the material that will be updated when frames arrive from the camera. Each update fires the BP event OnNextVideoFrame; we will use it.

First, create a material and call it M_Webcam. Open it in the editor and create a TextureSampleParameter2D node wired into the material's Emissive Color pin. Rename the parameter to Texture (the value will be updated via this parameter name); any default texture will do as its placeholder value. After that, save the material and close it.

Now return to the BP created earlier. Open it and add any mesh: a cube or a plane. Promote this mesh to a variable; let it be Billboard. In the BP Event Graph, from the Event BeginPlay node, create a Create Dynamic Material Instance node; as its target, specify our mesh, and as the Source Material, select the previously created M_Webcam. Create a variable of type Dynamic Material Instance named DynamicMaterial, drag from the node's Return Value to create a Set DynamicMaterial node (assigning the dynamic material instance to the variable), and finally assign the same instance to the mesh via a SetMaterial node.

Now configure the texture update. From the Event OnNextVideoFrame node, create a Set Texture Parameter Value node; in Target specify the DynamicMaterial variable we created, in Parameter Name enter "Texture", and in Value specify Video Texture - a variable exposed to BP by the WebcamReader class.


Step 5. Obtaining the location and rotation vector of the camera
After each texture update, the location and rotation data are available to BP in two variables: CameraLocation and CameraRotation. In the Event Graph their values can be assigned to other objects, or at least displayed.

Calibrating the camera and setting the location of the markers
The necessary utilities are in a separate archive.

Use Calibration.exe to start the camera calibration procedure. Calibration uses images of a chessboard; you can use photos, recorded video, or a direct video stream from the camera. The utility prints its startup parameters when launched. To calibrate quickly from the webcam stream, run calibration.bat. If calibration succeeds, a camera.xml file is created containing the values of two matrices: camera_matrix and distortion_coefficients. These values should be copied into the appropriate variables in the WebcamReader.cpp constructor.
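Copying the calibration output into the code might look like this (a sketch: the array types come from the MarkersDetector constructor signature in the linker errors above, and all numeric values are placeholders to be replaced with your own camera.xml output):

```cpp
#include <array>

// Placeholder calibration values -- replace with the camera_matrix and
// distortion_coefficients values from your own camera.xml.
std::array<double, 9> cameraMatrix = {
    800.0,   0.0, 320.0,   // fx,  0, cx
      0.0, 800.0, 240.0,   //  0, fy, cy
      0.0,   0.0,   1.0
};
std::array<double, 8> distortionCoefficients = {
    -0.10, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
};
```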

Since MarkersDetector determines the camera's location from markers, you must specify each marker's id and its location relative to the room's coordinate system in advance.
To generate a marker image using a Hamming code, use the MakeMarker.exe utility. The first argument specifies the marker id: from 0000000000 to 1111111111. The second argument is optional: the name of the file the marker image will be written to (by default, the marker's id). Ids need not be chosen in order. How many markers to print depends on the room; for testing, one is enough. Print the resulting images so that each marker on its sheet remains square, and keep the side length of all the squares the same.

Place the printed markers indoors and record their coordinates in the room's coordinate system, for example measured from a corner. Any unit works, for example centimeters. Add the marker coordinates to WebcamReader.cpp in the markersLocations variable in id:loc format, by analogy with the entries already there; leading zeros of the id are dropped. Also pay attention to markerHalfSize: the code sets it to 10, which is half the width of your printed marker's square, in the same unit used to record the marker locations.
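A sketch of what those variables might look like (the map type matches the MarkersDetector constructor signature from the linker errors above; the ids and coordinates are illustrative only):

```cpp
#include <array>
#include <map>

// Marker locations in room coordinates (here: centimeters), keyed by marker id
// with leading zeros dropped. The ids and positions below are examples only.
std::map<int, std::array<float, 3>> markersLocations = {
    { 5,  {  0.0f, 0.0f, 150.0f } },  // marker 0000000101
    { 12, { 80.0f, 0.0f, 150.0f } },  // marker 0000001100, 80 cm to the right
};

float markerHalfSize = 10.0f; // half the printed square's width, same unit
```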

But both tutorials have the same gap: neither explains how to get around this Android build issue.

I’ve read the article on linking static libs, where it mentions something about compiling as a DLL, but I can’t do that in VS without losing the ability to compile altogether; I’ve been using the build configuration that came by default with the makefile.

Ultimately all I want is to run OpenCV and write some C++ to extend a Blueprint with an x/y diff of facial movement. I can’t get past this library step, though.

I get a lot of errors when I compile for PS4, all like this: Project\Intermediate\Build\PS4\Project\Development\OnlineSubsystemSteam\Module.OnlineSubsystemSteam.1_of_2.cpp : error : L0039: reference to undefined symbol `SteamInternal_ContextInit’ in file “D:\Project\Intermediate\Build\PS4\Project\Development\OnlineSubsystemSteam\Module.OnlineSubsystemSteam.1_of_2.cpp.o”

I hope we can find a workaround.

To deploy third-party libraries on Android you need to create an XML file using APL (Android Plugin Language). This file contains the instructions to configure and copy the needed files during the packaging process. Add it in your Build.cs:

AdditionalPropertiesForReceipt.Add(new ReceiptProperty("AndroidPlugin", pathToAplFile));

The name of the library passed to **PublicAdditionalLibraries** must not include the lib prefix or the .so extension:

PublicAdditionalLibraries.Add(Path.Combine(ThirdPartyPath, "MarkersDetector/Android/markersdetector"));
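A minimal APL sketch for this case might look like the following (the file paths and library name are illustrative and assume the .so lives under ThirdParty/MarkersDetector/Android; adjust them to your layout and target ABI):

```xml
<?xml version="1.0" encoding="utf-8"?>
<root xmlns:android="http://schemas.android.com/apk/res/android">
	<!-- Copy the shared library into the APK's native-libs folder
	     during packaging (armeabi-v7a shown as an example ABI). -->
	<resourceCopies>
		<copyFile src="$S(PluginDir)/../../ThirdParty/MarkersDetector/Android/libmarkersdetector.so"
		          dst="$S(BuildDir)/libs/armeabi-v7a/libmarkersdetector.so" />
	</resourceCopies>
	<!-- Load the library at startup, before the engine needs it. -->
	<soLoadLibrary>
		<loadLibrary name="markersdetector" failmsg="markersdetector library not loaded and required!" />
	</soLoadLibrary>
</root>
```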

Take a look at this tutorial: