User posts Yuri Kovelenov
11 January 2015 22:42
Try posting on webgljobs.com. Is it only programming that's needed — are the models already available?
11 January 2015 15:06
I agree detecting which rendering bottleneck is the culprit would be difficult. I guess I was hoping you guys might have experimented with the idea
Adapting scene rendering quality to performance conditions on the fly: yes, this is definitely worth considering and we will look at it.
Just like in the WebGL Aquarium example, I can imagine how I could have textures dynamically loaded from low to higher quality as long as the fps is under 60.
Texture resolution can kill performance when you hit the video memory limit (especially on mobile). For this reason we support loading both compressed (for now only the DDS/s3tc format is supported) and halved ("min50") textures. These textures should be converted offline with our resource converter.
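The variant-selection logic described above can be sketched as a small helper. The "min50" and DDS/s3tc variants are the ones the resource converter mentioned above produces; the helper name and the exact file-naming scheme here are illustrative assumptions, not the engine's API.

```javascript
// Sketch: pick a texture variant based on runtime capabilities.
// File-naming scheme is an assumption for illustration.
function pickTextureVariant(baseName, hasS3tc, lowVideoMemory) {
    if (hasS3tc)
        return baseName + ".dds";       // compressed: least video memory
    if (lowVideoMemory)
        return baseName + ".min50.png"; // halved-resolution fallback
    return baseName + ".png";           // full quality
}

// In the browser, s3tc support can be probed via the standard extension:
//   var gl = canvas.getContext("webgl");
//   var hasS3tc = !!gl.getExtension("WEBGL_compressed_texture_s3tc");
```

The actual capability check relies on the standard `WEBGL_compressed_texture_s3tc` extension query; everything else is a toy decision function.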
The next step would be doing the same thing with the meshes. But with outrageous mesh file sizes this won't be as simple. Please let me know if my following thought process is correct:
Both options make sense. Of course, it is not convenient to program such behavior in an application (Three.js style). For now we can propose using a LOD system, which is quite effective and is even supported in the Blender viewport. This way small distant objects are not rendered at all, while other distant objects are rendered as low-poly with simple textures and materials.
Like in your cartoon_interior demo, I could load the scene (with low-quality meshes) and then manually raise the mesh poly count by loading and replacing the meshes with higher-poly ones. The problem here is that there would be a delay while loading the higher-quality models, especially if internet speeds are bad (not to mention the drop in performance). The other option would be to load meshes of all quality levels at first, start with the LO_Q version with all the others hidden, and call them up by un-hiding them.
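The second option above ("load all, start with LO_Q hidden/unhidden") can be sketched as a tiny quality manager. The quality names and the manager itself are illustrative; the actual show/hide step would map to engine calls for toggling object visibility.

```javascript
// Minimal sketch of the "load all levels, un-hide on demand" approach.
// Level names mirror the LO_Q naming used above; the manager is a toy.
function makeLodManager(levels) {
    var current = 0; // start at the lowest quality, others stay hidden
    return {
        current: function() { return levels[current]; },
        raise: function() {
            // un-hide the next quality level, hide the previous one
            if (current < levels.length - 1)
                current++;
            return levels[current];
        }
    };
}
```

A caller would invoke `raise()` whenever the FPS budget allows, swapping which mesh variant is visible without any further network loading.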
The most efficient way is how it's done in video games, which is with tessellation and displacement maps. Unless you have this kind of feature in the works, is there any other way you think this could be done?
Unfortunately, WebGL, which is based on OpenGL ES 2.0, does not provide hardware-accelerated tessellation capability. The Three.js teapot tessellation demo is implemented, first, in JavaScript (that is, on the CPU side), and second, the algorithm is teapot-specific: it is hardcoded and so cannot be applied in the general case.
Here are some three.js examples:
tessellation
displacement
As a consequence, displacement techniques would require loading both high-poly meshes and displacement textures, which is even less effective than loading just high-poly meshes.
In Blend4Web, we mimic tessellation for water rendering: its waves are high-poly close to the camera and low-poly at a distance. The same could be applied to terrain rendering, where the surface is displaced with heightmaps (however, we do not support such a technique yet).
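The distance-based scheme described for the water (and for the LOD system earlier) boils down to picking a detail level from the camera distance. A minimal sketch, with thresholds chosen purely for illustration:

```javascript
// Toy distance-based LOD selection: full detail near the camera,
// simplified geometry at a distance, culled entirely beyond a limit.
// The threshold values are illustrative assumptions.
function lodForDistance(dist) {
    if (dist > 100) return null;   // too far: not rendered at all
    if (dist > 30)  return "low";  // low-poly, simple textures/materials
    return "high";                 // full detail close to the camera
}
```

In practice the engine evaluates something like this per object each frame, so small distant objects simply drop out of the render list.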
Sorry for the long post, I hope I articulated myself adequately [smiling-face-smiling-eyes]
We love discussion and original thoughts, so I must thank you for your feedback and suggestions.
11 January 2015 10:39
Obviously some high-end Android devices will out-perform some PCs, so detecting for a mobile browser isn't the best solution.
Yes, we observe this very often.
Is there a way to test for device/browser performance before loading a scene, like making a certain number of draw calls and then timing them? Any number below a certain threshold would be able to load more demanding scenes. Wouldn't this be a better approach to determine whether a device can perform well enough to run a scene?
Well, the problem is that it is very difficult to detect the exact bottlenecks of the rendering pipeline for different cases. Draw-call performance, which depends on the number of separately rendered objects (batches), is just one of them. This is so-called CPU-bound performance. There is also GPU-bound performance, when FPS depends on the number of triangles and on the complexity of shaders. Finally, fillrate can be a bottleneck if there are too many pixels to render, for example in layered scenes or particle systems, or when you have a high-res monitor.
Things become even more complex because we have different browsers. Browsers can implement different optimization strategies themselves. Of course, some of them can be just buggy, and slow for this reason.
I'm trying to come up with a better solution for automatically loading scenes based on system performance, instead of using the detect_mobile Navigator userAgent property.
Agreed, this is not a perfect way to deal with performance. And yes, loading a simpler scene can probably help. For the case when performance is GPU-bound, we already have a feature for switching material complexity: LEVELS_OF_QUALITY.
The idea of running benchmarks upon application startup definitely makes sense. For example, the classic WebGL Aquarium app first starts in low quality (no normal maps, low resolution, few fish) and then progresses to higher quality if the FPS is good enough. You can just give it a try.
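The Aquarium-style escalation above can be sketched as a pure decision step: collect frame times over a probe window, then compare the average FPS against a threshold. The threshold and window size here are illustrative assumptions.

```javascript
// Sketch: decide whether to escalate quality after a startup benchmark.
// frameTimesMs would be collected in the browser via requestAnimationFrame
// and performance.now(); here the decision logic is kept pure.
function shouldRaiseQuality(frameTimesMs, minFps) {
    var total = frameTimesMs.reduce(function(a, b) { return a + b; }, 0);
    var avgFps = 1000 / (total / frameTimesMs.length);
    return avgFps >= minFps;
}
```

Running this check repeatedly, stepping up one quality level each time it passes, reproduces the "start low, progress upward" behavior described for the Aquarium demo.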
09 January 2015 10:14
In the get_object_by_name(name, data_id) function there is a second argument, with which you can access objects from multiple loaded scene files. Again, look at this canonical example.
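The reason the second argument exists is that several loaded scene files can each contain an object with the same name, so a name alone is ambiguous. A toy lookup illustrating the idea (the real call is the engine's get_object_by_name(name, data_id); this helper and its data model are assumptions for illustration):

```javascript
// Toy version of a per-file object lookup: data_id indexes which loaded
// scene file to search, disambiguating identically named objects.
function getObjectByName(loadedScenes, name, dataId) {
    var scene = loadedScenes[dataId];
    if (!scene) return null;
    return scene.find(function(o) { return o.name === name; }) || null;
}
```

With the real API, data_id corresponds to the scene file's load order, so the same object name resolves differently depending on which file you target.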
09 January 2015 10:01
I found it here https://www.blend4web.com/en/forums/topic/67/
You are not the first person who asks for forum search, we'll think about it.