Logical Computer Actors.

Simply put, this is what I plan to do in C++ as my getting-started project to understand actor-actor and actor-component interaction.

I want to create three actors.

  1. The Input Data Actor. The data should be converted to binary (byte-sized) data chunks in the actor and be pulled by the processing actor.
  2. The processing actor should simply process the binary (byte) data that it pulls from the input device (how the processing works is not relevant to the question) and push it onward to the output device.
  3. The output device should hold some kind of texture or material that it can write to, based on the data pushed from the processing actor, to act as an in-game screen.

I have concluded the following: there should be three actors, named Storage (AStorage), Screen (AScreen) and Computer (AComputer).
Each should hold a Static Mesh Component to be a visible object in game.

For the Input Data Actor, for future-proofing's sake, I need to make it toggleable: if it's on, the data should be pullable by the processing actor.

UCLASS()
class AStorage : public AActor
{
	GENERATED_BODY()

	UPROPERTY(EditAnywhere)
	bool bOn; // on or off

	UPROPERTY(EditAnywhere)
	TArray<uint8> Data; // data buffer
};

UCLASS()
class AComputer : public AActor
{
	GENERATED_BODY()

	UPROPERTY(EditAnywhere)
	bool bOn; // on or off

	virtual void Tick(float DeltaTime) override;
	// I guess all the processing would go into Tick, with an obvious check for
	// whether we are on or off, and probably some throttle that would limit
	// computations/sec. The processing unit is toggleable: once it turns on,
	// it will pull the data from the input device (if that device is on) and
	// process it (later logically process and compute it), but for now it
	// should simply write to a boolean buffer of a fixed size and push it to
	// the screen.

	UPROPERTY(EditAnywhere)
	bool DisplayData[64 * 32];
};
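The throttle mentioned above can be prototyped engine-agnostically by accumulating elapsed time and only stepping when a fixed interval has passed. A minimal sketch in plain C++ (no Unreal types; `TickThrottle` and `StepsDue` are my own names):

```cpp
// Accumulates frame delta times and reports how many fixed-rate
// computation steps are due, so a variable-rate Tick can drive a
// fixed computations/sec budget.
struct TickThrottle
{
    double Interval;         // seconds per computation step
    double Accumulated = 0.0;

    explicit TickThrottle(double StepsPerSecond)
        : Interval(1.0 / StepsPerSecond) {}

    // Call once per Tick with the frame's DeltaTime; returns the
    // number of whole steps to run this frame.
    int StepsDue(double DeltaTime)
    {
        Accumulated += DeltaTime;
        int Steps = 0;
        while (Accumulated >= Interval)
        {
            Accumulated -= Interval;
            ++Steps;
        }
        return Steps;
    }
};
```

With a 2 steps/sec budget, a 1.25 s frame yields two steps with 0.25 s carried over to the next frame, so uneven frame times still average out to the target rate.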

UCLASS()
class AScreen : public AActor
{
	GENERATED_BODY()

	UPROPERTY(EditAnywhere)
	UTexture* ScreenDisplay; // the texture where we will display the output

	// "Event/function call": loop through the boolean data and write black or
	// white to the corresponding texture pixel coordinate.
	void ProcessInput(bool* DisplayData);
};

The question marks I am trying to figure out are the following. If I want to hook up the connecting pieces through Blueprint once the code is done, would I define that input and output as predefined UPROPERTYs? What do I need to think of? Maybe it's bad design to depend on the actual classes; maybe I should break it down and just have defined ports/sockets instead of the objects? The problem then is how the screen would know when to update. I currently don't want it to process and update its texture every frame, only when the data is provided.
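One way to avoid the screen depending on the computer class, and to update only when data arrives, is to let the producer hold a callback that the consumer registers. A minimal engine-agnostic sketch using std::function; in Unreal this role is usually played by a delegate, but all names below are illustrative:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// The producer (the "computer") knows nothing about the screen class;
// it only pushes finished frames into whatever sink was registered.
struct FramePusher
{
    std::function<void(const std::vector<uint8_t>&)> OnFrameReady;

    void PushFrame(const std::vector<uint8_t>& Frame)
    {
        if (OnFrameReady)          // only fire when a consumer is hooked up
            OnFrameReady(Frame);
    }
};

// The consumer (the "screen") updates only when a frame is pushed,
// not every engine tick.
struct ScreenSink
{
    std::vector<uint8_t> LastFrame;
    int UpdateCount = 0;

    void Consume(const std::vector<uint8_t>& Frame)
    {
        LastFrame = Frame;
        ++UpdateCount;
    }
};
```

Wiring `Pusher.OnFrameReady = [&](const std::vector<uint8_t>& F){ Sink.Consume(F); };` means each `PushFrame` bumps `UpdateCount` exactly once, and no screen work happens between pushes.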

USTRUCT()
struct FDisplayPort
{
	GENERATED_BODY()

	bool DisplayData[64 * 32];
};

USTRUCT()
struct FDataPort
{
	GENERATED_BODY()

	TArray<uint8> Data;
};

When it comes to the texture, I just expect there are clear examples of how to manipulate texture data on this site, and that I only need a material for the screen that takes said texture as an input. Or are there any other pitfalls here that I might miss?

Ok, I got temporary graphical components showing. And I have a device-port struct on all the objects; the computer has two, since it's connected to both the screen and the storage object.

Next step is adding a tick and seeing if we can pull anything from the storage.

The question is: if I have one port on the storage that holds data, and I set the port on the computer to the port from the storage through Blueprint, will they share the same port, or will the computer take a copy of the port from the storage? The port is a USTRUCT type.
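On the copy question: a USTRUCT is a value type, so assigning one struct property to another copies it (including any TArray inside), and the two actors will not share state afterwards. Plain C++ shows the same semantics; `FDataPort` here is a stand-in for the UStruct without the reflection macros, and std::vector mirrors TArray's deep-copy-on-assignment behaviour:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for the DataPort USTRUCT.
struct FDataPort
{
    std::vector<uint8_t> Data;
};

// Returns true when assigning one port to another produced an
// independent copy (mutating the copy leaves the original intact).
bool PortAssignmentCopies()
{
    FDataPort StoragePort;
    StoragePort.Data = {1, 2, 3};

    FDataPort ComputerPort = StoragePort;  // value copy, not a reference
    ComputerPort.Data.push_back(4);        // edits the computer's copy only

    return StoragePort.Data.size() == 3 && ComputerPort.Data.size() == 4;
}
```

If shared state is actually wanted, the usual Unreal alternatives are holding a pointer to the owning actor or component, or pushing updates through a delegate instead of copying the struct around.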

The data is now fed from the storage into the computer and on to the screen.
Next step: outputting the data to a texture.


Some progress.

To quickly work around my issues with textures being in unexpected formats and so forth (since that is not part of the current focus), I will make a mechanical screen. It will have a white background with 32×64 instances of a black panel, where each panel is flipped 90 degrees around its origin when the pixel is active and flipped back when it's not.
I can't imagine this being a performance issue.

What do you actually want the screen to do? You might want to use a UMG Widget actor for that part if you only want to write text (i.e. like a console screen). If you are planning on doing graphics you can obviously use your texture method (or you could do both and have an image UMG widget behind a text UMG widget on the same actor).

The problem with my texture solution is that I can't figure out or get consistent behaviour of the textures when it comes to the pixel data. It works great for random values, but a completely white screen turns into a screen of green and black rows. Sometimes the middle of the texture is black. Actually, I just need 32×64 pixels that can be toggled on and off. The actual real magic is in the processor, which will be a full-fledged VM running CHIP-8 bytecode and producing the output for the screen.

So I want the screen to toggle pixels on and off based on input.

Ok, I got the screen rolling with fed-in data.
I will add it as soon as it's uploaded to YouTube.

The processor takes byte data, checks whether the current byte is even or odd, and writes the result to the screen buffer. Once the entire screen has been written, the processor starts again, but one index ahead of the previous data.
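That even/odd pass can be sketched standalone; `EvenOddPass`, the wrap-around over the data, and the odd-means-lit polarity are my own choices:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Fills a screen buffer of ScreenSize pixels from byte data starting
// at StartIndex: a pixel is lit when the corresponding byte is odd.
// Indexing wraps around the data, so the next pass can simply start
// one index ahead of the previous one.
std::vector<bool> EvenOddPass(const std::vector<uint8_t>& Data,
                              size_t StartIndex, size_t ScreenSize)
{
    std::vector<bool> Screen(ScreenSize, false);
    if (Data.empty())
        return Screen;
    for (size_t i = 0; i < ScreenSize; ++i)
    {
        const uint8_t Byte = Data[(StartIndex + i) % Data.size()];
        Screen[i] = (Byte % 2) != 0;  // odd -> pixel on
    }
    return Screen;
}
```

Running consecutive passes with StartIndex 0, 1, 2, ... produces the shifting pattern described above, which makes it easy to see that the screen is really being driven by the processor.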

This is just a concept to make sure that the data supplied to the screen actually behaves, and that the processor actually processes the data and can provide it to the screen.

Now I can start working on the actual VM.

5 byte ops defined. 30 to go.
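CHIP-8 opcodes are 16-bit values dispatched mostly on the top nibble. A minimal decode sketch for two of the real opcodes (6XNN: set register VX to NN; ANNN: set the index register I to NNN); the struct and function names are mine:

```cpp
#include <cstdint>

// Minimal CHIP-8 machine state: 16 8-bit registers V0..VF and the
// index register I.
struct Chip8State
{
    uint8_t  V[16] = {};
    uint16_t I = 0;
};

// Executes a single opcode if it is one of the two ops sketched here;
// returns false for anything not yet implemented.
bool Execute(Chip8State& S, uint16_t Opcode)
{
    switch (Opcode >> 12)             // dispatch on the top nibble
    {
    case 0x6:                         // 6XNN: VX = NN
        S.V[(Opcode >> 8) & 0xF] = Opcode & 0xFF;
        return true;
    case 0xA:                         // ANNN: I = NNN
        S.I = Opcode & 0x0FFF;
        return true;
    default:
        return false;
    }
}
```

The same switch grows one case at a time as the remaining ops get defined, with sub-dispatch on the low nibble or byte for families like 8XY_ and FX__.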

All byte ops are now defined. The VM is about to be transferred into Unreal, and then the TSM-C8-LCA1 will see the light of day.
After that it’s time to debug the VM and the output. :slight_smile: Movie coming.

The VM runs through its memory, reading memory, writing to registers, and writing to the screen where requested.

Still not the most interesting clip, but all screen updates are a result of the VM processing bytecode.

Next step is to implement the 16-key keyboard and load a game of Tetris.
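The CHIP-8 keypad is 16 hexadecimal keys (0-F); a common convention maps them onto the 1234/QWER/ASDF/ZXCV block of a modern keyboard. A sketch of that mapping (the layout is a convention, not mandated by CHIP-8 itself):

```cpp
// Maps a physical key character (from the 1234/QWER/ASDF/ZXCV block)
// to a CHIP-8 key index 0x0..0xF; returns -1 for unmapped keys.
int MapKeyToChip8(char Key)
{
    switch (Key)
    {
    case '1': return 0x1; case '2': return 0x2;
    case '3': return 0x3; case '4': return 0xC;
    case 'Q': return 0x4; case 'W': return 0x5;
    case 'E': return 0x6; case 'R': return 0xD;
    case 'A': return 0x7; case 'S': return 0x8;
    case 'D': return 0x9; case 'F': return 0xE;
    case 'Z': return 0xA; case 'X': return 0x0;
    case 'C': return 0xB; case 'V': return 0xF;
    default:  return -1;
    }
}
```

In-engine, key events would feed this mapping to set and clear bits in a 16-entry pressed-keys array that the FX0A and EX9E/EXA1 opcodes read.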

Tetris and Space Invaders.