Sweet Home 3D Forum » List all forums » Forum: Features use and tips » Thread: Question About Photo Rendering » Post: Re: Question About Photo Rendering
Print at Feb 6, 2026, 4:09:25 PM
Posted by mazoola at Nov 25, 2015, 9:12:13 AM
Re: Question About Photo Rendering

I've not studied the SunFlow code -- and the project evidently died out before much, if any, technical documentation was created -- but typically the biggest factors determining how long an image takes to render are the number of light sources, the number of transparent/translucent objects, and, to a lesser extent, the number of reflective objects in a scene. (Assuming all other settings remain the same, that is. Should you decide, say, to use the advanced photo processing plugin and activate caustics in your final scene, you could easily increase your rendering times by a factor of 10 to 100 -- probably even more.)

To render an image, for each pixel a ray (vector) first has to be plotted from the location of the virtual camera lens in the 'direction' of that pixel until a virtual object is encountered. Once this initial target is detected, additional rays are plotted from the point where the original ray met the object to each of the light sources in the scene. The color and amount of light returned is then calculated based upon each source's potential contribution to the total. Once all those calculations are complete, a value is stored for the pixel, and the process begins again.

In some cases, additional rays must be cast beyond simply one per light source per pixel -- for instance, if the object encountered by the original ray is translucent/transparent or reflective. In the first case, rays may need to be traced from the object to each light source in order to determine the contribution made by the object's specular, ambient, and/or diffuse reflectivity. Then, an additional ray is shot 'through' the object -- typically on a path that diverges from the original camera-to-object vector, depending on the object's degree of refraction.
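Setting aside reflection and refraction for a moment, the basic per-pixel loop -- one primary ray from the camera, then one shadow ray per light source -- can be sketched roughly like this. This is a toy sphere-only scene with diffuse-only shading; the names and structure are my own illustration, not SunFlow's actual code:

```python
import math

# Minimal sketch of the per-pixel loop described above: one primary ray,
# then one shadow ray per light source. Sphere-only scene and diffuse-only
# shading are illustrative assumptions, not SunFlow's API.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return (v[0] / n, v[1] / n, v[2] / n)

def hit_sphere(origin, direction, center, radius):
    """Distance along the ray to the sphere, or None on a miss."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)          # direction is assumed unit length
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None        # epsilon avoids self-intersection

def closest_hit(origin, direction, spheres):
    """Nearest sphere the ray strikes, as (point, normal, color), or None."""
    best_t, best = math.inf, None
    for center, radius, color in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and t < best_t:
            point = tuple(origin[i] + t * direction[i] for i in range(3))
            best_t, best = t, (point, normalize(sub(point, center)), color)
    return best

def trace_pixel(camera, pixel_dir, spheres, lights):
    """One primary ray, then one shadow ray per light, for one pixel."""
    hit = closest_hit(camera, pixel_dir, spheres)
    if hit is None:
        return (0.0, 0.0, 0.0)                            # background
    point, normal, color = hit
    shade = 0.0
    for light in lights:                                  # cost scales with light count
        to_light = normalize(sub(light, point))
        if closest_hit(point, to_light, spheres) is None: # light unobstructed?
            shade += max(0.0, dot(normal, to_light))      # diffuse term only
    return tuple(c * shade for c in color)
```

With a single red sphere and one light, `trace_pixel` returns a shaded red for a ray that hits the sphere and black for one that misses -- and note that every extra entry in `lights` adds another `closest_hit` call per pixel.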
This second ray continues until it, too, encounters an object -- at which point the entire process may repeat, depending on the amount of recursion set in the program. Finally, though, once all of the ray casts and calculations triggered by the initial ray are complete, a final color value is assigned to the pixel.

Note this can be a very processing-intensive calculation to make. For instance, if your scene contains a red rose inside a blue-tinted bell jar, pixels representing the rose could be calculated based upon (a) the amount and color of the light source reflecting from the bell jar, plus (b) the amount and color of the light source reflecting from the surface of the rose, (c) diminished by the degree of opacity of the bell jar and (d) color-shifted by the intensity of tint in the bell jar's glass, with (e) the precise area of the rose used in calculation 'b' determined by the glass's degree of refraction -- (f) times the number of unobstructed light sources in range.

It appears the living/dining room layout in your first post contains at least 25 light sources -- not counting the sun. In contrast, assuming it was generated using only the 'sun,' the last image above contains only a single light source; all else being equal, it would require only 4% as many shadow-ray calculations to render.

There are numerous ways one can attempt to accelerate the render process -- many of which appear to be implemented in SunFlow by default. For instance, when you first load a plan file, you may notice all of your furniture items in the 3D window initially display as groupings of featureless white blocks. These are the 'bounding blocks' defined for each object, used to accelerate collision detection between objects and rays. As you might imagine, complex objects require complex equations to model them -- so if you're shooting rays for a lot of light sources in a room full of complex items, the system is going to crawl.
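To put rough numbers on that 25-lights-versus-sun comparison: with one shadow ray per light source per pixel, the shadow-ray count scales linearly with the light count. (The 800x600 resolution below is just an assumed example; any resolution gives the same ratio.)

```python
# Rough shadow-ray count behind the 4% figure above. Assumes one shadow
# ray per light source per pixel and ignores recursion for
# reflection/refraction; the resolution is an arbitrary example.

def shadow_rays(width, height, num_lights):
    return width * height * num_lights

many_lights = shadow_rays(800, 600, 25)   # the 25-light living/dining scene
sun_only = shadow_rays(800, 600, 1)       # last image, lit by the sun alone
print(sun_only / many_lights)             # 1/25, i.e. 4% as many shadow rays
```

Bounding-block acceleration, described next, attacks the cost of each individual ray test instead of the number of rays.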
However, if you define a simple, rectangular bounding block *containing* the item, conduct your initial collision test against the bounding block, and only model the full item once a hit on the block itself is detected, the greater simplicity of modeling the block more than offsets the cost of the occasional doubled test.

As I said, SunFlow seems to include a number of accelerators and simplifiers by default, so your image is likely rendering about as optimally as can be expected. The easiest way to speed things up is to cut back on the number of lights you use, with lesser improvements possible from cutting back on highly reflective and transparent/translucent surfaces. Keep in mind that "lights" means "anywhere, on any level" -- if your basement lights are on while you're trying to render a view of the penthouse, each pixel mapped will waste a few microseconds calculating whether or not the basement lights shine on it. (I've taken to placing lights on their own levels, one level per floor, and making unused levels not visible.)
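That box-first shortcut might look like the following sketch. `ray_hits_aabb` is the standard 'slab' test for an axis-aligned box, and `full_model_test` is a hypothetical stand-in for the expensive full-model intersection -- none of these names come from SunFlow:

```python
import math

# Sketch of the bounding-block shortcut described above: run a cheap
# axis-aligned box test first, and only pay for the full model once the
# box reports a hit. Names and structures are illustrative.

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: cheaply reject rays that miss the bounding box."""
    t_near, t_far = -math.inf, math.inf
    for axis in range(3):
        if direction[axis] == 0.0:
            # Ray is parallel to these slabs; it must already lie between them.
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
            continue
        t1 = (box_min[axis] - origin[axis]) / direction[axis]
        t2 = (box_max[axis] - origin[axis]) / direction[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0

def intersect_item(origin, direction, item, full_model_test):
    # The occasional doubled test (box, then model) on a hit costs far less
    # than always running the complex model intersection on every ray.
    if not ray_hits_aabb(origin, direction, item["box_min"], item["box_max"]):
        return None
    return full_model_test(origin, direction, item)
```

A ray that misses the box is rejected after a handful of divisions and comparisons, so the expensive model intersection only ever runs for the relatively few rays whose box test succeeds.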