My Hikvision Story

Got this…

[image: cables]

added this…

[image: hikvision]

next thing you know…

[image: van]

solution…

[image: tinfoil]

When You Are “Hacked”

Sometime in 2014, I received a database dump from a high-profile industry site. I received the file from an anonymous file-sharing site via a Twitter user that quickly disappeared. The database contained usernames, email addresses, password hashes (SHA1), the salt used, the IP address used to access the site, and the approximate geographical location (an IP geolocation lookup – nothing nefarious).

I had canceled my subscription in January 2014, and the breach happened later than that. I don’t believe I received a notification of the breach. Many others did, but I would absolutely remember if I had received one – in part because I discussed the breach with a former employee at the blog, and in part because I was in possession of said DB.

A user reached out to me, seemingly puzzled as to why I would be annoyed at not receiving a notification – seeing as I was no longer a member, why would I care that my credentials were leaked? No one would be able to log into the site using my account anyway.

Here’s the issue I have with that. I happen to have different passwords for different things – but a lot of people do not; they use the same password for many different things. Case in point: say the dump contains a user with the email address someuser@gmail.com, and someone runs a dictionary or rainbow-table attack and recovers the password. Do you think there’s a likelihood that the same password will work if they try to log into the mail account at Gmail? Sure, it’s bad to reuse passwords, but do people do it? You bet.
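To make that concrete, here is a minimal dictionary-attack sketch in Node.js. It assumes the site stored sha1(salt + password) as a hex digest – the actual scheme used in the dump is my assumption. Strictly speaking, a salt defeats precomputed rainbow tables, but it does nothing against a per-user dictionary attack, and SHA1 is fast enough that millions of candidates per second are feasible on ordinary hardware:

const crypto = require('crypto');

// a minimal dictionary-attack sketch; sha1(salt + password) as a hex digest
// is an assumed scheme, not necessarily what the breached site used
function crack(leakedHash, salt, wordlist) {
  for (const candidate of wordlist) {
    const digest = crypto.createHash('sha1')
      .update(salt + candidate)
      .digest('hex');
    if (digest === leakedHash) return candidate; // now try it on Gmail…
  }
  return null;
}

// sha1('12345') with an empty salt – a depressingly common password
console.log(crack('8cb2237d0679ca88db6464eac60da96345513964', '', ['letmein', '12345']));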

So, when your site is breached, I think you have an obligation to inform everyone affected by the breach – regardless of whether they are current members or not. I would imagine anyone in the security industry would know this.


Camera Proxy

There’s a lot of paranoia in the industry right now, some warranted, some not. The primary issue is that when you plug something into your network, you basically have to trust the vendor not to spy on you “by design” and not to provide a trivial attack vector to third parties.

First things first. Perhaps you remember that CCTV means Closed-Circuit Television. Pay attention to those first two words. I am pretty sure 50% or more of all “CCTV” installations are not closed at all. If your CCTV system is truly closed, there’s no way for the camera to “call home”, and it is impossible for hackers to exploit any attack vectors, because there is no access from the outside world to the camera. There are plenty of PCs running terrible and vulnerable software out there, but as long as these systems are closed, there’s no problem. Granted, it also limits the flexibility of the system, but that’s the price you pay for security.

At the opposite end of the spectrum are cameras that are directly exposed to the internet. This is a very bad idea, and most professionals probably don’t do that. Well… some clearly do, because a quick scan of the usual sites reveals plenty of seemingly professional installations where cameras are directly accessible from the internet.

To expose a camera directly to the internet, you usually have to alter the NAT tables in your router/firewall. This can be a pain in the ass for most people, so another approach, called hole-punching, is often used. This requires a STUN server between the client sitting outside the LAN (perhaps on an LTE connection via AT&T) and the camera inside the LAN. The camera registers with the STUN server via an outbound connection – and almost all routers/firewalls allow outbound connections. The way STUN servers work probably confuses some people, and they freak out when they see the camera making a connection to a “suspicious” IP, but that’s simply how things work, and not a cause for alarm.
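For the curious, that registration step is just an outbound UDP exchange. Here is a minimal sketch of a STUN binding request in Node.js – the protocol mechanics, not what any particular camera does internally; stun.l.google.com is a well-known public STUN server, used purely for illustration:

const dgram = require('dgram');
const crypto = require('crypto');

const MAGIC = 0x2112A442; // the STUN magic cookie (RFC 5389)

// 20-byte STUN header: Binding Request, empty body, random transaction ID
const request = Buffer.alloc(20);
request.writeUInt16BE(0x0001, 0);        // message type: Binding Request
request.writeUInt16BE(0x0000, 2);        // message length: no attributes
request.writeUInt32BE(MAGIC, 4);         // magic cookie
crypto.randomBytes(12).copy(request, 8); // transaction ID

const socket = dgram.createSocket('udp4');
socket.on('message', (msg) => {
  // walk the attributes, looking for XOR-MAPPED-ADDRESS (0x0020, IPv4 assumed)
  let offset = 20;
  while (offset + 4 <= msg.length) {
    const type = msg.readUInt16BE(offset);
    const length = msg.readUInt16BE(offset + 2);
    if (type === 0x0020) {
      const port = msg.readUInt16BE(offset + 6) ^ (MAGIC >>> 16);
      const ip = [msg[offset + 8] ^ 0x21, msg[offset + 9] ^ 0x12,
                  msg[offset + 10] ^ 0xa4, msg[offset + 11] ^ 0x42].join('.');
      console.log('public address: ' + ip + ':' + port); // what a peer would connect to
    }
    offset += 4 + length + ((4 - (length % 4)) % 4); // attributes are 32-bit aligned
  }
  socket.close();
});
socket.send(request, 19302, 'stun.l.google.com');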

Now, say you want to record the cameras in your LAN on a machine outside your LAN – perhaps you want an Azure VM to record the video. But how will the recorder on Azure (outside your LAN) get access to your cameras inside the LAN, unless you set up NAT and thus expose your cameras directly to the internet?

This is where the $10 camera proxy comes in (the actual cost is higher because you’ll need an SD card and a PSU as well).

So, here’s a rough sketch of how you can do things.

  • On Azure you install your favorite VMS
  • Install Wowza or EvoStream as well

EvoStream can receive an incoming RTMP stream and make the stream available via RTSP. It basically changes the protocol but uses the same video packets (no transcoding). So, if you were to publish a stream at, say, rtmp://evostreamserver/live/mycamera, that stream will be available at rtsp://evostreamserver/mycamera. You can then add a generic RTSP camera that reads from rtsp://evostreamserver/mycamera to your VMS.

The next step is to install the proxy; you can use a very cheap Pi clone or a regular PC.

  • Determine the RTSP address of the camera in question
  • Download FFMpeg
  • Set up FFMpeg so that it publishes the camera to EvoStream (or Wowza) on Azure

Say you have a camera that streams via rtsp://192.168.0.100/video/channels/1; the command looks something like this (all on one line):

ffmpeg -i rtsp://username:password@192.168.0.100/video/channels/1 
-vcodec copy -f flv rtmp://evostreamserver/live/mycamera

This will make your PC grab the AV from the camera and publish it to the EvoStream server on Azure, but the camera is not directly exposed to the internet. The PC acts as a gateway, and it only creates an outbound connection to another machine that you also control.

You can now access the video from the VMS on Azure, and your cameras are not exposed at all, so regardless of how vulnerable they are, they will not expose any attack vectors to the outside world.

Using Azure is just an example; the point is that you want to isolate the cameras from the outside world, and this can be trivially accomplished by using a proxy.

As a side note: if cameras were deliberately spying on their users, by design, this would quickly be discovered and published. That there are bugs and vulnerabilities in firmware is just a fact of life, not proof of anything nefarious – so calm down, but take the necessary precautions.

Orange Pi One

This is getting ridiculous.

I just received my $10 computer from China. I paid a premium for the (required) SD card, as I do not have the patience to wait for one to arrive in the mail. My 5V/2A charger for my old (still functional) PSP works as a power supply. I then downloaded Armbian and booted.

A few commands later, and I have a $20 camera proxy.

I don’t actually plan to use it as my camera proxy, but as a small controller for a number of sensors I plan to add – for example, using a cheap, modified PIR sensor as input to the controller.

As you may know, I also have a Raspberry Pi 2. This little device is incredibly stable, and has only been rebooted once in the last 3 months, and that was by accident.

Hopefully you’ll eventually be able to get a $100 device that you simply plug into your infrastructure, and that little device will work standalone or as a node in a much larger VMS – but that’s a bigger project that I might pick up later.

Some of the commands I used:

sudo apt-get update

sudo apt-get install ffmpeg

ffmpeg -i rtsp://...... -vcodec copy -f flv rtmp://....

Prism Skylabs

I like Prism Skylabs; they seem to have their shit together. It seems to have a very well-designed UI, and the idea – at first glance – seems great. Wouldn’t it be great to get feedback on your store-layout strategy?

Pearson’s Law:

That which is measured improves. That which is measured and reported improves exponentially.

Unfortunately, I’ve measured my pet rock for the last 6 months to no avail. That damn thing hasn’t improved one bit.

There’s no question that Prism Skylabs can provide some interesting metrics in a very beautiful package, but I started wondering whether Prism Skylabs can prove that using the system will increase revenue (and ultimately profit) for stores that actively use it. I believe that most stores already have pretty fine-tuned systems for correlating ads, events, placement, and revenue, based on years of experience (and most likely recorded video).

Maybe it’s like tracking the time it takes for me to drive through the city every day. When I get to my destination I already know if I had a good day, and if my phone told me “today it took 20% longer than usual” there’s not much I could do with that. It doesn’t make sense for me to go back to the beginning and do the run once again. I can try again tomorrow, but the next day the circumstances may have changed. And the system can’t say, for certain, that if I took another route I’d be 20% faster in the long term. It can just measure what happened on a given day, under those circumstances, and those circumstances will have changed tomorrow.

So, if you get Prism Skylabs, do you keep iterating and shuffling your wares around on the shelves? When do you stop? What if shuffling things around improves things in the very short term but is counterproductive in the long term (where the hell are the Twizzlers now?)

But it does look nice…

Penny Pincher

It’s been 2 years since I built my current workstation, and it’s still a very capable machine. It has an i7 3770K, 32 GB RAM, and an Nvidia GTX 670. There really isn’t a rational reason to upgrade. While full recompilation of the source code takes 10 minutes, I rarely need to do that, so most of the time the limitation is really how quickly I can type and move the mouse around.

I suppose it’s like getting a new car. How often do you really need a new car? And do you really need a car of that size, with that acceleration? In most cases, the answer is no. Yet people buy new cars they don’t need all the time.

So I might be able to rationalize that getting a trophy workstation is irrational but normal, and therefore it is OK for me to indulge. But then I look at the cost/performance of the high-end gear, and the predicament returns. Do I buy the high-end gear that I want, but that is too expensive compared to the performance it offers, or do I go for the sweet spot? For example, the quad-core i7 6700K offers pretty good value versus the more expensive hex-core 6800K, but the 6700K will not deliver a noticeable performance boost compared to the 3770K…

In many ways it is similar to getting a fast car; you have seen people drive these cars fast on winding mountain roads, and you might do so too once in a while, but nowhere near as often as you make yourself believe when you get it. Same with the PC: I see people running GTA V and Battlefield 1 at a level of fidelity that is just mind-blowing, but I know, in my heart, that I won’t spend more than an hour per month playing these games. Perhaps I am paying for the privilege of knowing that if I wanted to, I too could play Titanfall 2 at the ultra setting.

Perhaps I will just buy some DIY IoT gear and have fun with that…

 

RAM Buffers

About 13 years ago, we had a roundtable discussion about using RAM for the pre-buffering of surveillance video. I was against it. Coding-wise, it would make things more complicated (we’d essentially have 2 databases), and the desire to support everything, everywhere, at any time made this a giant can of worms that I was not too happy to open. At the time, physical RAM was limited, and chances were that the OS would decide to push your RAM buffer to the swap file, causing severe degradation of performance. Worst of all, it was not deterministic when things got swapped out, so all things considered, I said nay.

As systems grew from the 25 cameras that were the maximum supported on the flagship platform (called XXV) to 64 and above, we started seeing severe bottlenecks in disk I/O. Basically, since pre-buffering was enabled by default, video from every single camera would pass through the disk I/O subsystem only to be deleted 5 or 10 seconds later. A quick fix was to disable pre-buffering entirely, which would help enormously if the system only recorded on event and the events were not correlated across many cameras.

However, RAM buffering was recently added to the Milestone recorders, which makes sense now that you have 64-bit OSes with massive amounts of RAM.

I always considered “record on event” a bit of a compromise. It came about because people were annoyed with the way the system would trigger when someone passed through a door. Instead of the door being closed at the start of the clip, the door would usually be 20% open by the time the motion detection triggered, so the beginning of the door opening would be missing.

A pre-buffer was a simple fix, but some caveats came up: systems that were set up to record on motion would often record all through the night, due to noise in the images. If the system also triggered notifications, the user would often turn down the motion-detection sensitivity until the false alarms disappeared. This had the unfortunate side effect of making the system too insensitive to properly detect motion in daylight, and thus you’d get missing video, with people and cars “teleporting” all over the place. Quite often the user would not realize the mistake until an incident actually did occur, and by then it’s too late.

Another issue is that the video requires a lot more bandwidth when there is a lot of noise in the scene. This meant that at night all the cameras would trigger motion at the same time, and the video would take up the maximum bandwidth allocated.

[image: filesize]

Video bandwidth over several days. The high-bandwidth areas are night-time recordings.

[image: filesize_zoom]

24-hour zoom

Notice that the graph above reaches the bandwidth limit set in the camera’s configuration, then seemingly drops through the night. This is because the camera switches to black/white, which requires less bandwidth. Then, in the morning, you see a spike as the camera switches back to color mode, after which the bandwidth drops off dramatically during the day.

Sticking this in a RAM-based pre-buffer won’t help. You’ll be recording noise all through the night, from just about every camera in your system, completely bypassing the RAM buffer. So you’ll see a large number of channels trying to record high-bandwidth video at the same time – which is the worst-case scenario.

Now, you may have the best server-side motion detection available in the industry, but what good does it do if the video is so grainy that you can’t identify anyone in it (it’s a human – sure – but which human?).

During the day (or in well-lit areas), the RAM buffer will help. Most of the time, the video will be sent over the network, reside in RAM for 5, 10 or 30 seconds, and then be deleted, never to be seen again – ever. This puts zero load on disk I/O and is basically the way you should do this kind of thing.
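The mechanics are simple enough to sketch in a few lines of JavaScript – my illustration, not Milestone’s implementation – assuming frames arrive as objects with a timestamp (in ms) and a data buffer, and that the recorder is anything with a write method:

// a minimal RAM pre-buffer sketch (illustration only)
class PreBuffer {
  constructor(seconds) {
    this.windowMs = seconds * 1000;
    this.frames = [];
  }
  push(frame) {
    this.frames.push(frame);
    // evict frames older than the window; in the common no-event case they
    // vanish here without ever touching the disk I/O subsystem
    const cutoff = frame.timestamp - this.windowMs;
    while (this.frames.length && this.frames[0].timestamp < cutoff) {
      this.frames.shift();
    }
  }
  // on an event, hand the buffered frames to the disk recorder, so the clip
  // starts before the trigger (the door is still closed at the first frame)
  flushTo(recorder) {
    for (const frame of this.frames) recorder.write(frame.data);
    this.frames = [];
  }
}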

But this raises the question: do you really want to do that? You are putting a lot of faith in the system’s ability to determine what might be interesting now, and possibly later, and in your own ability to configure the system correctly. It’s very easy to see when the system is creating false alarms; it is something entirely different to determine whether it missed something. The first problem is annoying; the latter makes your system useless.

My preference is to record everything for 1 to 3 days and rely on external sensors for detection, which then also determine what to keep in long-term storage. This way, I have a nice window to go back and review the video if something did happen, and I can then manually mark the pre-buffer for long-term storage.

“Motion detection” does provide some meta-information that can be used later when I manually review the video, but relying 100% on it to determine when/what to record makes me a little uneasy.

But all else being equal, it is an improvement…

 

Mirai, mirai on the wall

Update: Mirai now has an evil sister 

Mirai seems to be the talk of the town, so I can’t resist – I have to chime in.

Let me start by stating that last week’s Dyn attack was predictable.

But I think it is a mistake to single out one manufacturer when others demonstrably have the same issues. Both Axis and NUUO recently had full-disclosure documents released that included the wonderful vulnerability “remote code execution” – which is exactly what you need to create the type of DDoS attack that hit Dyn last week.

There’s not much we can do right now. The damage has been done, and the botnet army is out there, probably growing every day, and any script-kiddie worth their salt will be able to direct the wrath of the botnet towards one or more internet servers that they dislike for one reason or another. When they do, it won’t make any headlines; it will just be another site that goes offline for a few days.

The takeaway (if you are naive) is that you just need a decent password and hardware from reputable sources. While I would agree with both in principle, the truth is that in many cases it is useless advice. For example, if you look at the Axis POC you’ll see that the attacker doesn’t need to know the password at all (I did some probing, and I am not sure about that now).

The impact?

“Impact: The impact of this vulnerability is that taking into account the busybox that runs behind (and with root privileges everywhere, in all the binaries and scripts) is possible to execute arbitrary commands, create backdoors, performing a reverse connection to the machine attacker, use this devices as botnets and DDoS amplification methods… the limit is the creativity of the attacker.”

And look, the affected cameras/encoders are based on BusyBox – a popular platform, and therefore a juicy target (see BashLite and Shellshock).

In other words:

[image: zombie]

I am not suggesting that Axis is better or worse than anyone else, the point is that even the best manufacturers trip up from time to time. It can’t be avoided, but thinking that it’s just a matter of picking a decent password is not helping things.

My recommendation is to not expose anything to the internet unless you have a very, very good reason to do so. If you do have a reason, then you should wrap your entire network in a VPN, so that you can only connect to the cameras via the VPN, and not via the public internet.

My expectation is that no one is going to do that; instead, they will change the password from “admin” to something else, and then go back to sleep.

TileMill and Ocularis

A long, long time ago, I discovered TileMill. It’s an app that lets you import GIS data, style the map, and create a tile pyramid, much like the tile pyramids used in Ocularis for maps.

[image: TileMill]

There are 2 ways to export the map:

  • Huge JPEG or PNG
  • MBTiles format

So far, the only supported mechanism for getting maps into Ocularis is via a huge image, which is then decomposed into a tile pyramid.

Ocularis reads the map tiles the same way Google Maps (and most other mapping apps) reads the tiles. It simply asks for the tile at x,y,z and the server then returns the tile at that location.

We’ve been able to import Google Maps tiles since 2010, but we never released it, for a few reasons:

  • Buildings with multiple levels
  • Maps that are not geospatially accurate (subway maps for example)
  • Most maps in Ocularis are floor plans; going through Google Maps is an unnecessary extra step
  • Reliance on an external server
  • Licensing
  • Feature creep

If the app relies on Google’s servers to provide the tiles, and your internet connection is slow or goes offline, then you lose your mapping capability. To avoid this, we cache a lot of the tiles, which is very close to bulk downloading – and that is not allowed. In fact, at one point I downloaded many thousands of tiles, which caused our IP to be blocked on Google Maps for 24 hours.

Using MBTiles

Over the weekend I brought back TileMill and decided to take a look at the MBTiles format. It’s basically a SQLite DB file, with each tile stored as a BLOB. Very simple stuff, but how do I serve the individual tiles over HTTP so that Ocularis can use them?

Turns out, Node.js is the perfect tool for this sort of thing.

Creating an HTTP server is trivial, and opening a SQLite database file takes just a couple of lines. So with less than 50 lines of code, I had made myself an MBTiles server that would work with Ocularis.
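For illustration, the core of such a server can look roughly like this – a sketch along the same lines, not the exact code; it assumes the sqlite3 npm package, PNG tiles, and a hypothetical tile database called world.mbtiles:

const http = require('http');
const sqlite3 = require('sqlite3');

// an MBTiles file is a SQLite DB with a 'tiles' table:
// zoom_level, tile_column, tile_row, tile_data (the BLOB)
const db = new sqlite3.Database('world.mbtiles', sqlite3.OPEN_READONLY);

http.createServer((req, res) => {
  const match = req.url.match(/^\/(\d+)\/(\d+)\/(\d+)\.png$/); // /z/x/y.png
  if (!match) { res.writeHead(404); return res.end(); }
  const [z, x, y] = match.slice(1).map(Number);
  const tmsY = Math.pow(2, z) - 1 - y; // MBTiles has the Y axis pointing up

  db.get(
    'SELECT tile_data FROM tiles WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?',
    [z, x, tmsY],
    (err, row) => {
      if (err || !row) { res.writeHead(404); return res.end(); }
      res.writeHead(200, { 'Content-Type': 'image/png' });
      res.end(row.tile_data); // the stored BLOB is already a PNG
    });
}).listen(8080);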

[image: tile server]

A few caveats: Ocularis has the Y axis pointing down, while MBTiles has it pointing up; flipping the Y axis is simple. And Ocularis has the highest-resolution layer at layer 0, while MBTiles has that inverted, so the “world tile” is always layer 0.
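Both conversions are one-liners; maxZoom here is a hypothetical name for the deepest zoom level present in the tile set:

// assuming Ocularis layer 0 = highest resolution, and maxZoom is the deepest
// zoom level in the MBTiles file (hypothetical names)
const z = maxZoom - ocularisLayer;   // invert the layer numbering
const tmsY = Math.pow(2, z) - 1 - y; // flip the Y axis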

So with a few minor changes, this is what I have.

 

I think it would be trivial to add support for ESRI tile servers, but I don’t really know if this belongs in a VMS client. The question is whether the time would not be better spent making it easy for the GIS folks to add video capabilities to their apps, rather than having the VMS client attempt to be a GIS planning tool.

 
