User posts Evgeny Rodygin
19 November 2014 10:57
Hi,
As described in this tutorial you can do it with an iframe html element.
It should look like this:
<iframe width="800" height="500" allowfullscreen src="path_to_exported_file.html"></iframe>
Width and height are the desired size in pixels.
allowfullscreen is optional.
19 November 2014 10:51
Hi,
You are right about the security issues, but you can still control the camera with the keyboard. You need to click the control element (a gear) inside the canvas; then you can move with WASD. It's something of a hack, but it works.
Eye camera movement is not supported on mobile devices yet. We'll probably add this functionality later; it will work similarly to the controls described in this tutorial.
Later this month we will add a new camera control type, "Hover", and an optional "panning" control for the "Target" camera. These will be much more comfortable on mobile devices.
14 November 2014 14:55
Hi b0,
For now, the only way to use a texture from outside the exported scene is to dynamically load another exported JSON file (for example, a material library) and inherit a material from one of its objects.
This is not a very convenient approach, so in 14.11 we'll introduce canvas textures, which should suit your needs perfectly. They are meant for dynamically uploading images, videos and GIFs, and it will even be possible to draw text on top of them from the API.
So I think it's better to wait until the end of this month.
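The dynamic-loading approach described above can be sketched as follows. This is a minimal sketch, not a drop-in: m_data.load(path, loaded_cb) follows the convention of Blend4Web's "data" module, but the module object is passed in as a parameter so the sketch stays self-contained, and the material-inheritance step is only indicated in a comment because its exact API depends on the SDK version.

```javascript
// Sketch: dynamically load a second exported JSON (e.g. a material library).
// `m_data` is assumed to behave like Blend4Web's "data" module, whose load()
// takes a path and a callback invoked once the extra scene data is ready.
function loadMaterialLibrary(m_data, path, onLoaded) {
    m_data.load(path, function (data_id) {
        // The material library's objects are now part of the scene; at this
        // point a material could be inherited from one of them onto a target
        // object (see the objects/materials API of your SDK version).
        onLoaded(data_id);
    });
}
```

In a real application m_data would come from b4w.require("data").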
13 November 2014 10:32
Hi Miguel,
This feature has already been requested, and we agree that it is a really important one.
In the next release we are adding a "Hover" camera, which will act like the camera in most strategy games.
As for panning the Target camera, this functionality will probably also be available later this month, in Blend4Web 14.11.
10 November 2014 19:12
Hi duarteframos,
Thanks for the kind words!
If I understood you correctly, you want to move the camera's target location. We have such a possibility in our Viewer application: if an object is selectable, you can press Z to move the view to its location.
We are also planning to add a Hover camera type. It will be a kind of strategy-game camera, much friendlier than Eye.
In any case, we'll consider adding the suggested functionality to the Web Player.
Reply to the message: "Also, I'll just leave this here in case you ever want to further improve the visual quality of the web rendering engine in the future: SSDO - Screen Space Directional Occlusion https://people.mpi-inf.mpg.de/~ritschel/Papers/SSDO.pdf"
Thanks for the link. We are always searching for ways to improve visual quality.
09 November 2014 22:33
Hi Cedric,
Welcome to the forum!
We have a good article on this task.
The first method is the most suitable for you, I think.
Btw, nice site!
05 November 2014 11:06
Reply to julper's message
Hey guys, after playing around with the NLA I found it is pretty easy to add simple interaction without any programming.
But now I have another question, is there any way to allow the user to navigate between different scenes?
I'll try to make it clearer: let's say I created 5 scenes, and each one of them is a room of a house. If I export each of these scenes as a separate html file, can I add a connection between them? For example, if the user clicks or selects the kitchen door, can they be taken to the html file of the living room scene?
If it cannot be done by having multiple html files, could it be done by having one file with different scenes…?
I hope the question is clear enough hehe.
Hi julper,
NLA scripting is not that powerful yet, so your problem can't be solved without programming.
Our resource upload API is pretty friendly, so it's probably a good time to learn a bit of JavaScript.
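To give the multi-file idea above a concrete shape: one simple approach is a plain-JavaScript lookup from clickable object names to exported scene files, with a redirect once one of them is picked. The object and file names below are made up for illustration, and the commented-out lines only hint at where the engine's picking call would plug in.

```javascript
// Hypothetical mapping from clickable "door" objects to exported scene files.
var SCENE_LINKS = {
    "kitchen_door": "living_room.html",
    "living_room_door": "kitchen.html"
};

// Pure helper: given a picked object's name, return the scene to open,
// or null if the object is not a door.
function nextSceneFor(objectName) {
    return Object.prototype.hasOwnProperty.call(SCENE_LINKS, objectName)
        ? SCENE_LINKS[objectName] : null;
}

// In a real Blend4Web app this would be wired to a click handler, roughly:
//   var obj = m_scenes.pick_object(event.clientX, event.clientY);
//   var target = nextSceneFor(/* name of obj */);
//   if (target) window.location.href = target;
```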
30 October 2014 14:40
Reply to HTML5's message
There is indeed a problem: normal maps break when used with Generated texture coordinates, which Blend4Web started supporting recently. The solution is to use UV maps for normal mapping. We will fix this in the next release.
Hi Evgeny,
Fast reply, amazing! Thanks!
I was using Blender 2.71 and Addon 14.09 (Mac OS 10.10), and the errors occurred. So I updated to the latest version, and the errors remain. I will send you the blend file later, FYI.
Another experiment: in order to figure out the problem, I tried using only one sphere and adding some texture, and the error occurred again, either as an exporting error or a runtime error.
PS, I tried the demos in the SDK; they all work well. Really weird. Waiting for your help!
As for the other messages in the console, you can fix them by removing procedurally generated textures (CLOUDS, STUCCI, etc.) and by using power-of-two textures: 16x16, 128x128, and so on.
Thank you for pointing out the error.
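A texture side length is safe in this respect exactly when it is a power of two; a tiny helper (plain JavaScript, independent of Blend4Web) to check that:

```javascript
// Returns true when n is a positive power of two (16, 128, 2048, ...).
// A power of two has a single bit set, so n & (n - 1) clears it to zero.
function isPowerOfTwo(n) {
    return n > 0 && (n & (n - 1)) === 0;
}
```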
30 October 2014 12:10
Hi, and welcome to our forum.
In the report above, the only real error is the last line:
Uncaught TypeError: Cannot read property 'offset' of undefined
All the others are warnings, which are described in our documentation.
First of all, you have exported your blend file with a Release Candidate version of our addon, which may be unstable.
I would recommend downloading the new version and trying it.
If the error occurs once more, please send your blend file to my email: evgeny-ar@blend4web.com.
26 October 2014 15:53
Reply to -Vampire-'s message
Yes, it is indeed missing, because quat is part of the open-source gl-matrix library. You can find its API on the developer's official site. But if it causes difficulties, perhaps it really is worth adding.
I noticed that there is no documentation for the quat module
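For readers landing here without the gl-matrix docs at hand: gl-matrix stores a quaternion as a flat [x, y, z, w] array. As a plain-JavaScript illustration of the semantics (this is not the library itself, just the standard Hamilton product written against that layout):

```javascript
// Multiply quaternions a and b (each [x, y, z, w]) into out, matching the
// component layout that gl-matrix's quat module uses.
function quatMultiply(out, a, b) {
    var ax = a[0], ay = a[1], az = a[2], aw = a[3];
    var bx = b[0], by = b[1], bz = b[2], bw = b[3];
    out[0] = ax * bw + aw * bx + ay * bz - az * by;
    out[1] = ay * bw + aw * by + az * bx - ax * bz;
    out[2] = az * bw + aw * bz + ax * by - ay * bx;
    out[3] = aw * bw - ax * bx - ay * by - az * bz;
    return out;
}
```

Multiplying by the identity quaternion [0, 0, 0, 1] leaves a rotation unchanged, which makes a handy sanity check.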
Reply to -Vampire-'s message
controls.create_mouse_click_sensor() and controls.create_mouse_move_sensor() by themselves "know nothing" about the objects in the scene. To implement the behavior you described, these sensors should be used together with the scenes module function scenes.pick_object(x, y), which tries to find an object at the given screen coordinates.
The point is that the callback is called on any mouse movement, regardless of whether I move the mouse over the mesh or not
Also, don't forget to enable the Selectable option on the relevant objects in Blender.
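The combination described above can be sketched like this. scenes.pick_object(x, y) is the call named in the reply; the handler wiring is an assumption, and the module object is passed in as a parameter (instead of b4w.require("scenes")) so the sketch stays self-contained.

```javascript
// Build a canvas click handler that picks the scene object under the cursor
// and forwards it to a callback. Only objects marked Selectable in Blender
// are returned by the engine's picking.
function makeClickPicker(m_scenes, onPicked) {
    return function (event) {
        var obj = m_scenes.pick_object(event.clientX, event.clientY);
        if (obj)
            onPicked(obj); // fires only when the click actually hit a mesh
    };
}

// Typical wiring (in a real app m_scenes comes from b4w.require("scenes")):
//   canvas.addEventListener("mousedown", makeClickPicker(m_scenes, cb));
```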