Forum Moderators: nerd, RedPhantom
(Last Updated: 2024 Nov 24 5:17 pm)
The camera/monitor prop should have a separate material zone that acts as the screen for images. If one exists and already has an image attached to it, check what kind of image map the vendor created; your new map will need to match its dimensions and orientation to display correctly. For a simple display this is easy. However, if you want to simulate glassy reflections on top of the display, the shader arrangement will be more involved.
If your scene has 3D objects representing cameras, you can even create new dolly cameras and parent one to each of your CCTV devices. That way, when something happens in your scene, you only have to render from one of the dedicated cams and then use that render on the corresponding monitor, for example along the lines of the sketch below.
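For the render-and-apply step, here is a minimal PoserPython sketch of what that could look like. It assumes a dolly camera named 'CCTVCAM_A', a monitor prop named 'Monitor_A' with a screen material zone called 'Screen', and a hard-coded output path; those names are placeholders, and the shader-tree calls are the ones I remember from the PoserPython manual, so verify them against your own scene and Poser version.

```python
# Hedged sketch: render one dedicated CCTV camera and pipe the saved image
# into the monitor's screen material. All names and paths are placeholders.
import poser

scene = poser.Scene()

# 1. Render from the dolly cam parented to the CCTV prop
cam = scene.Actor("CCTVCAM_A")               # placeholder camera name
scene.SetCurrentCamera(cam)
scene.Render()
render_path = "C:/Renders/CCTV/cam_A.png"    # placeholder output path
scene.SaveImage("png", render_path)

# 2. Plug that render into the monitor's screen material zone
monitor = scene.Actor("Monitor_A")           # placeholder prop name
mat = monitor.Material("Screen")             # placeholder material zone name
tree = mat.ShaderTree()

img_node = tree.CreateNode(poser.kNodeTypeCodeIMAGEMAP)
# "Image_Source" is the image map node's file input, as I recall it
img_node.InputByInternalName("Image_Source").SetString(render_path)

root = tree.Node(0)                          # usually the PoserSurface root node
tree.AttachTreeNodes(root, "Diffuse_Color", img_node)
tree.UpdatePreview()
scene.DrawAll()
```

For a glassy screen, the same image map would simply feed more than one channel (diffuse plus reflection or specular), which is where the shader arrangement gets more involved, as noted above.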
𝒫𝒽𝓎𝓁
(っ◔◡◔)っ
👿 Win11 on i9-13900K@5GHz, 64GB, RoG Strix B760F Gaming, Asus Tuf Gaming RTX 4070 OC Edition, 1 TB SSD, 6+4+8TB HD
👿 Mac Mini M2, Sonoma 14.6.1, 16GB, 500GB SSD
👿 Nas 10TB
👿 Poser 13 and soon 14 ❤️
The camera and CCTV monitors are my own models, so applying an image to the screen isn't a problem, and I already have camera objects parented to the cameras. What I was hoping for is a way to render the view of each camera and assign that to an input in the Material Room: where you currently have the option for an image or a video file, there would be an option for 'Render from Camera X' or something like that, done 100% in-program. I don't want to render CCTVCAM_A, B, C, D, E, F, G and H individually and then apply all of those images to my monitors. What if I wanted to produce an animated clip with physics? Each shot could end up looking different if they're not all rendered in the same single pass.
To answer your question simply: not directly in the UI. What you are asking about is called a render texture. Many game engines have this option and can do it directly, without compositing. Poser doesn't know how to use a camera's render output as a material within the Material Room directly in the UI, which means it has to be done via compositing.
This all has to be set up manually in Poser, or scripted. Basically, the scene has to be rendered from each of the additional cameras that feed screens in the scene, and the resulting images then placed on those screens for the final render. Each of those cameras renders the scene by itself first, with its renders saved sequentially into its own folder. Then the texture(s) are applied to the screens, using a movie node pointed at the sequentially saved renders, and the final render is done.
This could be done with scripts, but it is still going to render everything; all the script does is automate putting it together for the final renders. Everything would have to be named properly internally, saved to its own folder with sequential file names, and so on (see the sketch below).
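As a rough illustration of that kind of automation, here is a hedged PoserPython sketch that loops over a set of CCTV cameras for every frame of an animation and saves each camera's render into its own folder with sequential file names. The camera names, the folder layout, and the frame handling (Poser frames being 0-based in Python) are assumptions on my part; check the calls against the PoserPython manual for your version.

```python
# Hedged sketch: batch-render every CCTV camera for every frame, saving
# sequentially named files per camera so a movie/image-sequence node can
# pick them up. All names and paths are placeholders.
import os
import poser

scene = poser.Scene()
out_root = "C:/Renders/CCTV"                           # placeholder output root
cam_names = ["CCTVCAM_A", "CCTVCAM_B", "CCTVCAM_C"]    # placeholder camera names

original_cam = scene.CurrentCamera()
num_frames = scene.FramesTotal()

for frame in range(num_frames):                        # frames are 0-based here
    scene.SetFrame(frame)
    for name in cam_names:
        folder = os.path.join(out_root, name)
        if not os.path.isdir(folder):
            os.makedirs(folder)
        scene.SetCurrentCamera(scene.Actor(name))
        scene.Render()
        scene.SaveImage("png", os.path.join(folder, "%s_%04d.png" % (name, frame)))

# Restore whatever camera was active before the batch run
scene.SetCurrentCamera(original_cam)
scene.DrawAll()
```

The screens would then use a movie node (or image sequence) pointed at each camera's folder for the final pass, as described above.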
I don't recall anyone ever asking for this in Poser, though that doesn't mean you couldn't request the option. It would still have to render everything unless the cameras and screens all happened to lie along the same line of sight. Game engines also handle this by rendering everything separately when more than one camera is used in a scene; they just do it one final frame at a time, for every frame that needs it, for obvious reasons.
Some things are easy to explain, other things are not........ <- Store -> <-Freebies->
Hi, quite an interesting thread! Not an easy one, because having an image map come straight out of a camera might seem simple to add to Poser. Thinking about it a bit more, though, there is the video feedback (Larsen) problem if one of the monitoring screens shows up in one of the other cameras... So the render engine would have to determine the order of rendering and put a limit on the recursion to avoid an infinite loop!
It would be great fun to have this function, though!
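Purely to illustrate that feedback problem, here is a hedged, engine-agnostic Python sketch of how a fixed number of passes could bound the recursion: each pass re-renders every screen-feeding camera against whatever the screens showed in the previous pass, so a screen that films another screen is simply cut off after N levels. The two helper functions are hypothetical stand-ins for the render and texture-assignment steps sketched earlier in the thread.

```python
# Hedged, conceptual sketch of bounding screen-on-screen feedback with a
# fixed number of render passes. The helpers are hypothetical placeholders.

MAX_FEEDBACK_PASSES = 2  # levels of "screen seen on another screen" to allow

def render_camera_to_file(camera_name, pass_index):
    """Hypothetical stand-in: render one camera and return the saved file path."""
    path = "C:/Renders/feedback/%s_pass%d.png" % (camera_name, pass_index)
    # ... render through the named camera and save to `path` (see earlier sketches)
    return path

def assign_file_to_screen(screen_name, image_path):
    """Hypothetical stand-in: plug an image file into a screen material."""
    # ... attach an image map node to the screen's material (see earlier sketch)
    pass

# Each screen is fed by one camera; placeholder names only.
feeds = {"Monitor_A": "CCTVCAM_A", "Monitor_B": "CCTVCAM_B"}

for pass_index in range(MAX_FEEDBACK_PASSES):
    # Render every feeding camera against the screens as they looked last pass,
    # then update the screens before the next pass (or the final beauty render).
    rendered = {screen: render_camera_to_file(cam, pass_index)
                for screen, cam in feeds.items()}
    for screen, image_path in rendered.items():
        assign_file_to_screen(screen, image_path)
```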
Not sure if this has been asked before (I couldn't find it if it has), but I'm trying to do a render in which someone is taking a picture of her own in the scene.
What I'm doing at the moment is rendering the POV of her camera with a dolly cam I've parented to her phone, saving that render, and then assigning it to the screen material of the phone for the wider shot I actually want. So my question, to streamline things a little: is there a way to assign a camera's view as a material? I've poked around in the Material Room and found a few things that looked promising, but got nowhere with them. Does anyone have any ideas, or is the way I'm already doing it all there is?
I'm hoping it isn't, as in an upcoming series of renders I want to do a mall security room kind of thing, with multiple CCTV cameras whose views change in each shot, and I'd much prefer to just set up the cameras and feed each view into a material on the monitors, rather than rendering every shot individually and then assigning each monitor's display by hand.
And if that's not possible, let's see it in Poser 14, huh? ^_^'