User posts: Bryan Irwin
08 May 2015 22:29
Great!
Here is my progress thus far:
www.streamfall.com/Demo/campfire.html
Eventually I am hoping to make this page into an interactive homepage for my small dev studio.
Two questions:
1. If I plan to work towards adding buttons and such, would it be best to keep building them inside the Blender project? I do find the engine a comfortable place to work, and the built-in constraints make it convenient. Or should I instead use one of the JSON export options as a template and build the UI on the JS side (see the sketch after these questions)?
2. I'll need to be able to extend the current project. I assume that means using Python and figuring out how to interface Python classes with Blender. I've not done that before, so I suppose I should start with the standard Blender scripting tutorials. Is that assumption correct? I also see there are options for visual scripting.
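To make question 1 concrete, here is a minimal sketch of the JS-side approach, assuming a plain HTML button overlaid on the 3D canvas; the label, styling, and placeholder handler are my own assumptions, not part of any export template:

    // Hypothetical UI button created in plain JS and overlaid on the canvas.
    var btn = document.createElement("button");
    btn.textContent = "Toggle campfire";
    btn.style.position = "absolute";
    btn.style.top = "16px";
    btn.style.left = "16px";
    document.body.appendChild(btn);

    btn.addEventListener("click", function () {
        // Placeholder: the actual call into the engine would go here.
        console.log("button clicked");
    });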
Thank you for your help!
Looking forward to going full 3d.
07 May 2015 18:14
Fantastic, thank you for your very rapid reply.
In my haste I posted before looking at more of the example projects; it turns out the Flag cloth simulation does pretty much everything I need with regard to constraints.
Interestingly, those constraints are established within Blender rather than in the code.
Confusingly, there seems to be far less JS code in this example. It looks as if the touch code has been compiled, and somewhat obfuscated, into the HTML itself.
That leads me to another question: I see that the JS code for touches is inside flag_caches_mix.html. What might that have been compiled from? Presumably the programmer responsible did not write the code in that form.
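To illustrate what I mean by compiled and obfuscated (a made-up example on my part, not this project's actual build output), readable source like the function below typically ships as a single dense line after a minifier pass:

    // Hypothetical readable source:
    function onTouchStart(event) {
        console.log("touch at", event.touches[0].clientX);
    }
    // ... which a minifier would typically emit as something like:
    // function a(b){console.log("touch at",b.touches[0].clientX)}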
Thanks. Eager to learn and work towards more ends with this SDK!
Looking forward to going full 3d.
07 May 2015 03:24
Hello,
I'm a Unity3D .NET developer attempting to dive into this new and really fantastic technology. I'm thrilled by how good the demos look across browsers and platforms; so far they outperform Unity's new WebGL build by a long shot.
I'm attempting a personal project that works similarly to the solar system application, and after a few days of trying to work out the system without much progress, I thought I would seek the community's help.
I have inferred that by adding modules such as "controls" I would have access, as app.js does, to things like touch and mouse input. However, I've been trying to hack something together and have so far failed.
What I would first like to do is just console.log() a message indicating that a touch occurred (see the sketch below). From there, I'll edit the existing control scheme to clamp or constrain the camera on some axis.
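Here is a minimal sketch of that first step using plain DOM events; the canvas element id and the clamp range are my own assumptions, independent of whatever the SDK's controls module provides:

    // Log a message whenever a touch starts on the canvas.
    // "main_canvas" is an assumed element id, not taken from the SDK.
    var canvas = document.getElementById("main_canvas");
    canvas.addEventListener("touchstart", function (event) {
        var t = event.touches[0];
        console.log("touch occurred at", t.clientX, t.clientY);
    });

    // Generic helper for later: clamp a camera angle to a fixed range.
    function clamp(value, min, max) {
        return Math.min(Math.max(value, min), max);
    }

    // e.g. vertAngle = clamp(vertAngle, -Math.PI / 4, Math.PI / 4);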
JS is not my strong suit, especially the web stack.
Have a little time? I'd love a bit of guidance.
Thanks!
Looking forward to going full 3d.