next thing you know…
Sometime in 2014, I received a database dump from a high-profile industry site. It came from an anonymous file sharing site via a Twitter user that quickly disappeared. The database contained user names, email addresses, password hashes (SHA1), the salts used, the IP addresses used to access the site, and the approximate geographical location (IP geolocation lookup – nothing nefarious).
I had canceled my subscription in January 2014, and the breach happened later than that. I don’t believe I received a notification of the breach of the database. Many others did, but I absolutely would remember if I had received one – in part because I discussed the breach with a former employee at the blog, and in part because I was in possession of said DB.
A user reached out to me, seemingly puzzled as to why I would be annoyed by not receiving a notification – seeing as I was no longer a member, why would I care that my credentials were leaked? No one would be able to log into the site using my account anyway.
Here’s the issue I have with that. I happen to have different passwords for different things – but a lot of people do not. A lot of people use the same password for many different things. Case in point: say you find a user with the email address email@example.com, and someone mounts a rainbow-table attack and recovers the password. Do you think there’s a likelihood that the same password would work if they try to log into the mail account at Gmail? Sure, it’s bad to reuse passwords, but do people do it? You bet.
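To make the rainbow-table point concrete, here’s a quick sketch using sha1sum (the password and salt values are made up for illustration). Without a salt, every user who picked the same password gets the same hash, so a single precomputed table cracks them all; a per-user salt makes each stored hash unique.

```shell
# Unsalted SHA1: identical passwords produce identical hashes,
# so one precomputed (rainbow) table covers every user.
printf '%s' 'hunter2' | sha1sum

# Salted SHA1: prepending a per-user salt changes the hash,
# so precomputed tables no longer apply.
printf '%s%s' '8f3a' 'hunter2' | sha1sum
```

The salt does not stop an attacker from brute-forcing one account, but it does stop the one-lookup-cracks-everything shortcut – which is exactly why a leaked salted-SHA1 dump is still dangerous for anyone who reused their password elsewhere.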
So, when your site is breached, I think you have an obligation to inform everyone affected by the breach – regardless of whether they are current members or not. I would imagine anyone in the security industry would know this.
There’s a lot of paranoia in the industry right now, some warranted, some not. The primary issue is that when you plug something into your network you basically have to trust the vendor to not spy on you “by design” and to not provide a trivial attack vector to 3rd parties.
First things first. Perhaps you remember that CCTV means Closed Circuit Television. Pay attention to those first two words. I am pretty sure 50% or more of all “CCTV” installations are not closed at all. If your CCTV system is truly closed, there’s no way for the camera to “call home”, and it is impossible for hackers to exploit any attack vectors, because there’s no access from the outside world to the camera. There are plenty of PCs running terrible and vulnerable software out there, but as long as these systems are closed, there’s no problem. Granted, it also limits the flexibility of the system, but that’s the price you pay for security.
At the opposite end of the spectrum are cameras that are directly exposed to the internet. This is a very bad idea, and most professionals probably don’t do that. Well… some clearly do, because a quick scan of the usual sites reveals plenty of seemingly professional installations where cameras are directly accessible from the internet.
To expose a camera directly to the internet you usually have to alter the NAT tables in your router/firewall. This can be a pain in the ass for most people, so another approach, called hole punching, is often used instead. This requires a STUN server between the client sitting outside the LAN (perhaps on an LTE connection via AT&T) and the camera inside the LAN. The camera registers with the STUN server via an outbound connection – and almost all routers/firewalls allow outbound connections. The way STUN servers work probably confuses some people; they freak out when they see the camera making a connection to a “suspicious” IP, but that’s simply how things work, and not a cause for alarm.
Now, say you want to record the cameras in your LAN on a machine outside your LAN, perhaps you want an Azure VM to record the video, but how will the recorder on Azure (outside your LAN) get access to your cameras that are inside the LAN unless you set up NAT and thus expose your cameras directly to the internet?
This is where the $10 camera proxy comes in (the actual cost is higher because you’ll need an SD card and a PSU as well).
So, here’s a rough sketch of how you can do things.
EvoStream can receive an incoming RTMP stream, and make the stream available via RTSP, it basically changes the protocol, but uses the same video packets (no transcoding). So, if you were to publish a stream at say rtmp://evostreamserver/live/mycamera, that stream will be available at rtsp://evostreamserver/mycamera. You can then add a generic RTSP camera that reads from rtsp://evostreamserver/mycamera to your VMS.
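In other words, the playback URL can be derived mechanically from the publish URL. A small sketch, assuming the mapping described above (rtmp://HOST/live/NAME becomes rtsp://HOST/NAME) and using the placeholder server name from the example:

```shell
# Derive the RTSP playback URL from the RTMP publish URL,
# assuming the mapping rtmp://HOST/live/NAME -> rtsp://HOST/NAME.
publish_url="rtmp://evostreamserver/live/mycamera"
without_scheme=${publish_url#rtmp://}   # evostreamserver/live/mycamera
host=${without_scheme%%/*}              # evostreamserver
name=${publish_url##*/}                 # mycamera
echo "rtsp://${host}/${name}"           # -> rtsp://evostreamserver/mycamera
```

Because there is no transcoding, the RTSP side delivers the exact same H.264 packets the camera produced – the server is only repackaging them.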
The next step is to install the proxy; you can use a very cheap Pi clone, or a regular PC.
Say you have a camera that streams via rtsp://192.168.0.100/video/channels/1, the command looks something like this (all on one line):
ffmpeg -i rtsp://username:password@192.168.0.100/video/channels/1 -vcodec copy -f flv rtmp://evostreamserver/live/mycamera
This will make your PC grab the AV from the camera and publish it to the evostream server on Azure, but the camera is not directly exposed to the internet. The PC acts as a gateway, and it only creates an outbound connection to another PC that you control as well.
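That outbound connection will occasionally drop, so in practice you’ll want something to restart ffmpeg when it exits. Here’s a minimal sketch; the retry helper is my own, and the camera credentials and server name in the usage note are placeholders:

```shell
#!/bin/sh
# retry: run a command until it succeeds, giving up after MAX attempts.
retry() {
    max=$1; shift
    attempt=0
    until "$@"; do
        attempt=$((attempt + 1))
        [ "$attempt" -ge "$max" ] && return 1
        sleep 1   # brief pause before reconnecting
    done
}

# Usage – substitute your own camera credentials and EvoStream host:
#   retry 5 ffmpeg -i "rtsp://username:password@192.168.0.100/video/channels/1" \
#       -vcodec copy -f flv "rtmp://evostreamserver/live/mycamera"
```

For an unattended box you’d probably wrap this in a cron job or an init script so the proxy comes back up after a reboot as well.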
You can now access the video from the VMS on Azure, and your cameras are not exposed at all, so regardless of how vulnerable they are, they will not expose any attack vectors to the outside world.
Using Azure is just an example; the point is that you want to isolate the cameras from the outside world, and this can be trivially accomplished by using a proxy.
As a side note. If cameras were deliberately spying on their users, by design, this would quickly be discovered and published. That there are bugs and vulnerabilities in firmware is just a fact of life and not proof of anything nefarious, so calm down, but take the necessary precautions.
This is getting ridiculous.
I just received my $10 computer from China. I paid a premium for the (required) SD card, as I did not have the patience to wait for one to arrive in the mail. The 5V/2A charger from my old, still functional, PSP works as a power supply. I then downloaded Armbian and booted.
A few commands later, and I have a $20 camera proxy.
I don’t actually plan to use it as my camera proxy, but as a small controller for a number of sensors I plan to add – for example, using a cheap modified PIR sensor as input to the controller.
As you may know, I also have a Raspberry Pi 2. This little device is incredibly stable, and has only been rebooted once in the last 3 months, and that was by accident.
Hopefully you’ll be able to get a $100 device that you simply plug into your infrastructure, and that little device will work standalone, or as a node in a much larger VMS – but that’s a bigger project that I might pick up later.
Some of the commands I used :
sudo apt-get update
sudo apt-get install ffmpeg
ffmpeg -i rtsp://...... -vcodec copy -f flv rtmp://....
“Neverending stooooooryyyyyyy… ”
I like Prism Skylabs; they seem to have their shit together. It seems like a very well designed UI, and the idea – at first glance – seems to be great. Wouldn’t it be great to get feedback on your store layout strategy?
That which is measured improves. That which is measured and reported improves exponentially.
Unfortunately, I’ve measured my pet rock for the last 6 months to no avail. That damn thing hasn’t improved one bit.
There’s no question that Prism Skylabs can provide some interesting metrics in a very beautiful package, but I started wondering if Prism Skylabs was able to prove that using the system will increase revenue (and ultimately profit) for stores that actively use the system. I believe that most stores have pretty fine-tuned systems for correlating ads, events, placement and revenue based on years of experience (and most likely recorded video).
Maybe it’s like tracking the time it takes for me to drive through the city every day. When I get to my destination I already know if I had a good day, and if my phone told me “today it took 20% longer than usual” there’s not much I could do with that. It doesn’t make sense for me to go back to the beginning and do the run once again. I can try again tomorrow, but the next day the circumstances may have changed. And the system can’t say, for certain, that if I took another route I’d be 20% faster in the long term. It can just measure what happened on a given day, under those circumstances, and those circumstances will have changed tomorrow.
So, if you get Prism Skylabs, do you keep re-iterating and shuffling your wares around on the shelf? When do you stop? What if shuffling things around improves things in the very short term, but is counterproductive in the long term? (Where the hell are the Twizzlers now?)
But it does look nice…
It’s been 2 years since I built my current workstation, and it’s still a very capable machine. It has an i7-3770K, 32GB of RAM and an Nvidia GTX 670. There really isn’t a rational reason to upgrade. While a full recompilation of the source code takes 10 minutes, I rarely need to do one, and so most of the time the limitation is really how quickly I can type and move the mouse around.
I suppose it’s like getting a new car. How often do you really need a new car? And do you really need a car of that size, with that acceleration? In most cases, the answer is no. Yet people buy new cars they don’t need all the time.
So I might be able to rationalize that getting a trophy workstation is irrational, but normal, and therefore it is OK for me to indulge. But then I look at the cost/performance of the high-end gear, and the predicament returns. Do I buy the high-end gear that I want, but that is too expensive compared to the performance it offers, or do I go for the sweet spot? For example, the quad-core i7-6700K offers pretty good performance vs. the more expensive hex-core i7-6800K, but the 6700K will not deliver a noticeable performance boost compared to the 3770K…
In many ways it is similar to getting a fast car; you have seen people drive these cars fast on winding mountain roads, and you might do so too from time to time, but nowhere near as often as you make yourself believe when you buy it. Same with the PC: I see people running GTA V and Battlefield 1 at a level of fidelity that is just mind-blowing, but I know, in my heart, that I won’t spend more than an hour per month playing these games. Perhaps I am paying for the privilege of knowing that if I wanted to, I too could play Titanfall 2 at the ultra setting.
Perhaps I will just buy some DIY IoT gear and have fun with that…