Here’s a video from 2015 showing the advantages of letting the GPU do some of the heavy lifting.
YouTubers are disappointed with Ryzen. They expected it to crush Intel in every single benchmark, and I had hoped it would too. What was I thinking?
The problem AMD has is that a lot of people seem to think that you can get a baby faster if you just add more women.
I’ve been watching a lot of indie coders do a lot of single-loop coding, which obviously will not scale across several cores. They are obsessed with avoiding L1/L2 cache misses, which is fine, but at the same time, they rarely talk about multi-threading. Some of the benchmarks I have seen leave the CPU at 30% utilization across all cores, which means there’s a lot of untapped potential.
Games with lots of autonomous and complex entities should scale quite well across cores – if the coder is not spending all his time organizing his data in a way that makes little sense on a multi-core system, and is willing to shed the dogma that threads are bad.
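The idea is simple: if each entity owns its own state, a per-tick update is an embarrassingly parallel map. A minimal sketch in Python (using multiprocessing to stand in for the native threads a real engine would use, since Python’s GIL prevents CPU-bound threads from scaling) — the entity tuple layout here is purely illustrative:

```python
from multiprocessing import Pool

def update_entity(entity):
    # Per-tick work for one autonomous entity: a pure function of
    # its own state, so it can run on any core without locking.
    x, y, vx, vy = entity
    return (x + vx, y + vy, vx, vy)

def tick(entities, workers=4):
    # Each entity owns its state, so a parallel map scales across
    # cores: no entity mutates another entity's data mid-tick.
    with Pool(workers) as pool:
        return pool.map(update_entity, entities)

if __name__ == "__main__":
    world = [(0.0, 0.0, 1.0, 2.0)] * 1000
    world = tick(world)
```

Interactions between entities (collisions, flocking) would need a second, read-only pass or a message queue, but the point stands: data organized per-entity parallelizes; one big loop over shared state does not.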
I am not replacing my 3770K though. I was very tempted to get something that could substantially reduce compilation times, but I spend <1% of my time on compilations, so even a massive improvement there would not really improve my overall productivity. And I am not thrilled about having to buy new RAM once again…
I recently saw a fun post on LinkedIn. Milestone Systems was bragging about how they have added GPU acceleration to their VMS, but the accompanying picture was from a different VMS vendor. My curiosity had the better of me, and I decided to look for the original press release. The image was gone, but the text is bad enough.
Let’s examine:
Pioneering Hardware Acceleration
In the latest XProtect releases, Milestone has harvested the advantages of the close relationships with Intel and Microsoft by implementing hardware acceleration. The processor-intensive task of decoding (rendering) video is offloaded to the dedicated graphics system (GPU) inside the processer [sic], leaving the main processor free to take on other tasks. The GPU is optimized to handle computer graphics and video, meaning these tasks will be greatly accelerated. Using the technology in servers can save even more expensive computer muscle.
“Pioneering” means that you do something before other people. OnSSI did GPU acceleration in version 1.0 of Ocularis, which is now 8 or 9 years old. Even the very old NetSwitcher app used DirectX for fast YUV conversion. Avigilon has been doing GPU acceleration for a while too, and I suspect others have as well. The only “pioneering” here is how far you can stretch the bullshit.
We experimented with CUDA on a high-end NVIDIA card years ago, but came to the conclusion that the scalability was problematic: while the CPU would show 5% utilization, the GPU was being saturated, causing stuttering video when we pushed a lot of frames.
Using Quick Sync is the right thing to do, but trying to pass it off as “pioneering”, and suggesting that you have some special access to Microsoft and Intel to do trivial things, is taking marketing too far.
The takeaway is that I need to remind myself to make my first gen client slow as hell, so that I can claim 100% performance improvement in v2.0.
Yesterday I took a break from my regular schedule and added a simple, generic HTTP event source to Ocularis. We’ve had the ability to integrate with IFTTT via the Maker Channel for quite some time, which allows you to trigger IFTTT actions whenever an event occurs in Ocularis. Soon, you will be able to trigger alerts in Ocularis via IFTTT triggers.
For example, IFTTT has a geofence trigger, so when someone enters an area, you can pop the appropriate camera via a blank screen. The response time of IFTTT is too slow and I don’t consider it reliable for serious surveillance applications, but it’s a good illustration of the possibilities of an open system. Because I am lazy, I made a trigger based on Twitter, that way I did not have to leave the house.
Making an HTTP event source did not require any changes to Ocularis itself. It could be trivially added to previous versions if one wanted to, but even if we have a completely open system, it doesn’t mean that people will utilize it.
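For the outgoing direction (an Ocularis event firing an IFTTT action), the Maker Channel is just an HTTP POST to a well-known URL. A minimal sketch in Python — the event name and key below are placeholders you would take from your own Maker Channel settings, not anything Ocularis-specific:

```python
import json
import urllib.request

def maker_url(event, key):
    # IFTTT Maker Channel endpoint: POSTing here fires any applet
    # listening for `event` under the account that owns `key`.
    return "https://maker.ifttt.com/trigger/%s/with/key/%s" % (event, key)

def trigger_event(event, key, value1=None):
    # Fire the trigger, optionally passing a value the applet can use
    # (e.g. a camera name) as "value1" in the JSON body.
    payload = json.dumps({"value1": value1}).encode("utf-8")
    req = urllib.request.Request(
        maker_url(event, key),
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

The incoming direction works the same way in reverse: IFTTT’s Maker action POSTs to whatever URL the HTTP event source listens on.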
Nauseated and sweaty, I remove my VR goggles. I feel sick, and I need to lie down. Resident Evil 7 works great in VR because things can sneak up on you from behind. You have to actually turn your head to see what was making that noise behind you.
On a monitor I can do a full panoramic dewarp from several cameras at once, and the only nausea I experience is from eating too many donuts too fast. There’s no “behind” and I have a superhuman ability to see in every direction, from several locations, at once. A friend of mine who played computer games competitively (before it was a thing) used the maximum FOV available to give himself an advantage over mere humans.
One feature that might be of some use is the virtual video wall. It’s very reminiscent of the virtual desktop apps that are already available.
And I am not even sure about the gaming aspect of VR. In the gaming world, people are already wondering if VR is dead or dying. Steam stats seem to suggest that this is the case, and when I went to try the Vive at the local electronics supermarket, the booth was deserted and surrounded by boxes and gaming chairs. Apparently you could book a trial run, but the website for doing so was slow, convoluted, and filled with ads.
Time will tell if this takes off. I am not buying into it yet.
Get a Raspberry Pi, or one of the very cheap clones. Then install FFmpeg and an RTMP server with RTSP capability (EvoStream, Wowza).
Make sure the RTMP server is operational.
Ask FFmpeg to convert from MxPEG to H.264 and broadcast to the RTMP server by entering this command (on one line):
ffmpeg -f mxg -i "http://[user]:[pass]@[camera-host]/cgi-bin/faststream.jpg?stream=mxpeg" -codec:v libx264 -b:v 500k -maxrate 500k -bufsize 1000k -vf scale=-1:480 -threads 0 -an -f flv [rtmp address]
If you are using EvoStream, you might have entered something like this for the RTMP address:
rtmp://[ip of evostream]/live/mobotix
If that is the case, you can add a generic RTSP camera to your VMS using this address:
rtsp://[ip of evostream]/mobotix
The MxPEG stream will now be converted to H.264 and recorded as such. You’ll miss out on the advantages of MxPEG, but sometimes there’s no other way around the issue.
John Carmack is arguably a genius, and when Facebook lost to ZeniMax, he vented his frustration on Facebook. When programmers meet lawyers, the programmer usually ends up frustrated. When someone argues that “doing nothing can be considered doing something”, you start wondering if you are the only sane person in the room, or if you are being gaslighted by someone in a suit.
I think John Carmack failed to realize just how far-reaching non-compete covenants can be. In some states, an employment contract can contain elements that severely limit your ability to work in related industries, and – perhaps surprisingly – the company often owns everything you create while under contract, even if it was made in your spare time. In many cases, the company does the right thing and lets you own that indie game you wrote on weekends and nights, but when Facebook buys a company for $2 billion, someone might catch the smell of sweet, sweet moolah.
Here’s how I see it.
In April 2012, John Carmack is working for ZeniMax and engages with Palmer Luckey regarding the initial HMD. It seems to me that John Carmack probably thought that what he did in his spare time was of no concern to ZeniMax, and that he was free to have fun with VR since ZeniMax was not interested in any of it.
At QuakeCon 2012 (August), Palmer Luckey is on stage with both Michael Abrash and John Carmack, talking about VR. Carmack, at this point in time, is clearly a ZeniMax employee, and I have a very hard time believing that Carmack didn’t work on VR-related research at this point.
In August 2013, John Carmack leaves ZeniMax and joins Oculus, where he starts working full time on VR. ZeniMax doesn’t seem to care. Perhaps they expected that Oculus would soon crash and burn (it probably would have without Facebook intervening).
Less than a year later, in July 2014, two years after Palmer Luckey and John Carmack exchanged a few words on a message board, Oculus is worth $2 billion to Facebook (according to Mark Zuckerberg, they are now $3.5 bn in the hole with this acquisition).
John Carmack says that not a single line of ZeniMax code was used, and while that may be technically true, you could say that ZeniMax funded the research to figure out the 700 ways not to make a lightbulb. Carmack then moved to Oculus, banged the code together, and the rest is history.
It’s pretty easy to convince someone who is not a programmer that code was copied. It’s pretty easy to find a technically competent person who will say that code was copied, even though all programmers know that a lot of code looks kinda similar. Rendering a quad in OpenGL looks pretty much the same in all apps, but is it copyright infringement?
Time will tell if Facebook/Oculus wins the appeal. I think the current verdict is fair (the initial $6 bn claim was idiotic).
An old colleague of mine made a post about trust, freedom and having everyone humming along to the same tune, and how the company’s (I suppose superior) culture could not easily be emulated by competitors.
Here’s a choice excerpt:
When businesses start losing touch with reality because of an arrogant belief in their own superiority and their company mission, they tend to adopt a pervasively positive attitude. The more insular the company’s outlook, the more buoyant its managers will tend to be about the company’s prospects.
In May 2008, Mary Poppendieck did a presentation on leadership in software development at Google. In it, she points out that at Toyota and 3M the product champions are “deeply technical” and “not from marketing”. The reason this works, she states, is that you need to “marry the technical possibilities with what the market wants”. If the products are driven by marketing people, the engineers will constantly be struggling to explain why perpetual motion machines won’t work, even if the market is screaming for them. So, while other companies are building all-electric vehicles and hybrids, your company is chasing a pipe-dream.
Innovative ideas are not necessarily technically complex, and may not always require top technical talent to implement. However, such ideas are often either quickly duplicated by other players, or rely on user lock-in to keep the competition at bay. E.g. Facebook and Twitter are technically simple to copy (at small scale), but good luck getting even 100 users to sign up. Nest made a nice thermostat, but soon after the market offered cheaper alternatives. Same with Dropcam. With no lock-in, there is no reason for a new customer to pick Dropcam over something cheaper.
To be truly successful, you therefore need to have the ability to see what the market needs, even if the market is not asking for it. If the market is outright asking, then everyone else hears that, and thus it’s hardly innovation. That doesn’t mean that you should ignore the market, obviously you have to listen to the market and offer solutions for the trivial requests that do make sense (cheaper, better security, faster processing, higher resolution and so on), and weed out the ones that don’t (VR, Blackberry, Windows Mobile). It doesn’t matter how innovative you are, if you don’t meet the most basic requirements.
However, it’s not just a question of whether something is technically possible; it’s also a question of whether your organization possesses the technical competency and time to provide a solution. If your team has an extremely skilled SQL programmer, but the company uses 50% of her time to pursue pipe-dreams or work on trivialities (correcting typos, moving a button, adding a new field), then obviously less time is available to help the innovation along.
Furthermore, time is wasted by doing things in a non-optimal sequence and failing to group related tasks into one update whenever possible. This seems to happen when technical teams are managed by non-technical people (or “technical” people who are trained in unrelated areas). Eventually, the team will stop arguing that you really should install the handrail before the hydrant, and simply either procrastinate or do what they are told at great direct (and indirect!) cost.
At GOTO 2016, Mary states that 50% of decisions made by product managers are wrong, and that 2/3 of what is being specced is unnecessary and provides no value to the end user. Therefore, she argues, development teams must move from being “delivery teams” to “problem solving teams”, and discard the notion that the product manager is a God-like figure able to provide a long list of do-this and do-that to his subordinates. Instead, the product manager must present the team with problems to solve, rather than solutions to implement.
To do this, I, too, believe the PM must be highly technical, so that they have the ability to propose possible solutions to the team (when needed). Without technical competency (and I am not talking about the ability to use Excel like a boss here), the PM will not be able to make the appropriate tradeoffs and will instead engage in very long and exhausting battles with developers who are asked to do something impossible.
Is Mary correct? Or does she not realize that developers are oddballs and mentally incapable of “understanding the market”? Comments welcome.