Time of SwapChain->Present()

    I already posted the same question on the AnswerHub, but I don't know whether this place is more suitable, so I'll ask it here as well.

    I'm currently looking for a way to get a timestamp that is as close as possible to the actual appearance of a new actor on the screen. The idea is to take the time of SwapChain->Present() of the very frame in which my actor becomes visible on the screen for the first time. The best way would of course be if I could send a command to the RHI after calling SpawnActor() on the game thread, telling it to take the time of Present() for the frame in which my actor first becomes visible. How could I achieve something like that, and how difficult would it be (I'm not a programmer)?
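
    For illustration only, here is a rough, untested sketch of what such a marker could look like in C++ (the function name SpawnStimulusAndMarkFrame is made up, and the module would need "RenderCore" and "RHI" among its Build.cs dependencies): enqueue a render command right after SpawnActor(). The lambda runs when the render thread processes that frame's commands, which is close to, but not exactly, the moment of Present(); with the RHI thread enabled, the actual Present() happens somewhat later.

    #include "CoreMinimal.h"
    #include "Engine/World.h"
    #include "RenderingThread.h"     // ENQUEUE_RENDER_COMMAND
    #include "RHICommandList.h"      // FRHICommandListImmediate
    #include "HAL/PlatformTime.h"

    // Hypothetical helper: spawn the stimulus actor and record both the game-thread
    // spawn time and the time at which the render thread reaches that frame.
    void SpawnStimulusAndMarkFrame(UWorld* World, TSubclassOf<AActor> StimulusClass,
                                   const FTransform& SpawnTransform)
    {
        const double SpawnTimeGT = FPlatformTime::Seconds(); // time of SpawnActor() on the game thread
        const uint64 SpawnFrame  = GFrameCounter;            // game-thread frame number
        World->SpawnActor<AActor>(StimulusClass, SpawnTransform);

        // Executed on the render thread when it processes the commands of this frame.
        ENQUEUE_RENDER_COMMAND(MarkStimulusFrame)(
            [SpawnTimeGT, SpawnFrame](FRHICommandListImmediate& RHICmdList)
            {
                const double FrameTimeRT = FPlatformTime::Seconds();
                UE_LOG(LogTemp, Log,
                    TEXT("Stimulus spawned in frame %llu at %.6f s; render thread reached that frame at %.6f s"),
                    SpawnFrame, SpawnTimeGT, FrameTimeRT);
            });
    }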

    However, since this is not for a game but for a neuroscientific experiment, I would also be happy with a less elegant but easier way to achieve this. One idea is to record the time of every Present() call. If I then know the time of SpawnActor(), I could figure out when the corresponding Present() was called. But this would only work if I always know how many frames lie between my game thread and the RHI thread. Can I be sure that there is always one frame between SpawnActor() and the corresponding Present(), so that I have to look for the second Present() after SpawnActor()? Or, if I disable r.OneFrameThreadLag, can I be sure that the very next Present() after SpawnActor() is the one I'm looking for?
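
    To see how the frames actually line up rather than assuming a fixed lag, one possibility (again only a rough, untested sketch; the function names StartFrameTimeLogging and StopFrameTimeLogging are made up) is to register for FCoreDelegates::OnEndFrame and FCoreDelegates::OnEndFrameRT and log one timestamp per game-thread frame and per render-thread frame. Note that the render-thread delegate fires near the end of the render-thread frame, not at Present() itself, so it only shows how the two threads relate, not the exact Present() time.

    #include "CoreMinimal.h"
    #include "Misc/CoreDelegates.h"
    #include "HAL/PlatformTime.h"

    static FDelegateHandle GEndFrameHandle;
    static FDelegateHandle GEndFrameRTHandle;

    // Log one line per game-thread frame and one per render-thread frame, so the
    // offset between SpawnActor() frames and render-thread frames can be measured
    // instead of assumed.
    void StartFrameTimeLogging()
    {
        GEndFrameHandle = FCoreDelegates::OnEndFrame.AddLambda([]()
        {
            UE_LOG(LogTemp, Log, TEXT("Game thread frame %llu ended at %.6f s"),
                GFrameCounter, FPlatformTime::Seconds());
        });

        GEndFrameRTHandle = FCoreDelegates::OnEndFrameRT.AddLambda([]()
        {
            UE_LOG(LogTemp, Log, TEXT("Render thread frame ended at %.6f s"),
                FPlatformTime::Seconds());
        });
    }

    void StopFrameTimeLogging()
    {
        FCoreDelegates::OnEndFrame.Remove(GEndFrameHandle);
        FCoreDelegates::OnEndFrameRT.Remove(GEndFrameRTHandle);
    }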

    Edit: In case it matters, I'm using DirectX 11 on Windows 10.


    #2
    Assuming this is for some sort of reaction-timing experiment, you'll get more accurate and repeatable results by recording the events directly with a high-speed camera filming both the screen and the human, and then counting the frames between the actor appearing and the human response to determine the response time. Your measurement error will also be known (based on the camera capture rate).

    Otherwise, you're going to have uncontrolled variables in frame buffering (how many frames are buffered, which you may be able to figure out) and in how the frame is presented on the screen (which varies from screen to screen and has historically been pretty bad on TV displays).

      #3
      Thanks for your response.
      You are right that this is about reaction time experiments. Unfortunately, the idea with a high-speed camera will not work, since I'm working on a VR project and have no way to capture the VR display with an external camera during the experiment.
      However, the time of the actual response is not that difficult to measure and can be handled with a keyboard hook (some of the "traditional" tools for visual stimulation like PsychoPy do exactly the same to get a frame-rate-independent response). The exact time of the stimulus onset on the screen is trickier, and that is what I'm working on right now, especially the issue with frame buffers and how to control them. So far my preliminary results are not that bad!
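
      For what it's worth, a minimal sketch of such a keyboard hook (untested, plain Win32 C++ outside the engine; it uses SetWindowsHookEx with WH_KEYBOARD_LL and timestamps the event with QueryPerformanceCounter, independent of the frame rate) could look like this:

      #include <windows.h>
      #include <cstdio>

      static LARGE_INTEGER GFreq;

      // Called by Windows for every low-level keyboard event; records a
      // high-resolution timestamp as soon as the key-down arrives.
      static LRESULT CALLBACK KeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
      {
          if (nCode == HC_ACTION && wParam == WM_KEYDOWN)
          {
              LARGE_INTEGER Now;
              QueryPerformanceCounter(&Now);
              const KBDLLHOOKSTRUCT* Key = reinterpret_cast<const KBDLLHOOKSTRUCT*>(lParam);
              printf("vk=0x%02X at %.6f s\n",
                     static_cast<unsigned>(Key->vkCode),
                     static_cast<double>(Now.QuadPart) / GFreq.QuadPart);
          }
          return CallNextHookEx(nullptr, nCode, wParam, lParam);
      }

      int main()
      {
          QueryPerformanceFrequency(&GFreq);
          HHOOK Hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc, GetModuleHandleW(nullptr), 0);

          // A message loop is required on this thread for the low-level hook to fire.
          MSG Msg;
          while (GetMessageW(&Msg, nullptr, 0, 0) > 0)
          {
              TranslateMessage(&Msg);
              DispatchMessageW(&Msg);
          }

          UnhookWindowsHookEx(Hook);
          return 0;
      }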

      You are also right about the issue with screen timing, but that's actually not that important to me, since it is a problem with visual stimulation on screens and reaction time measurement in general. Furthermore, we have screens with relatively good timing that are not far off from CRT screens (which usually have really good timing behavior), and VR headsets in particular (I'm using the Vive) use OLED displays, which show much better timing behavior than normal computer screens.
      Last edited by Alphatierchen; 02-06-2019, 09:16 AM.
