name: inverse class: center, middle, inverse layout: true
@unixbigot — Christopher Biggs — Continuous Dashboarding
--- layout: true
@unixbigot — Christopher Biggs — Continuous Dashboarding
--- class: center, middle template: inverse # Continuous Dashboarding - Your DevOps Airbag .green[Building and shipping Agile dashboards using your existing data silos.] ![](node-red-title-flow.png) .bottom.right[ Christopher Biggs
.blue[@unixbigot] ] ??? G'day everyone. Today I want to talk about things that I've learned over the last dozen years working to keep web-commerce and software-as-a-service businesses alive, with varying degrees of success. I want to tell you about why I think you should treat your business intelligence systems as seriously as your customer-facing products, and I'm going to cover some techniques and tools to help you do this. --- .left-column[ # Agenda ] .right-column[ ### **Strategy**: * .red[**Who**] are your dashboards for? * .orange[**What**] to monitor * .yellow[**When**] to build your dashboards ### **Tactics**: * .green[**How**] to develop dashboards * .blue[**Where**] to fit dashboards into your DevOps pipeline * .purple[**With**] some suggested tools ] ??? First, I'll speak about why it's important to consider that different users of data have unique needs, and I'll go into what kinds of things you can do with your data beyond just projecting it on the wall. Next I want to stress why it's valuable to have the same people who write code thinking about data, and working on both at the same time. After this we'll get more practical and look at how to apply DevOps principles to dashboards and data. --- class: center, middle template: inverse # Who? ## Dashboards and Business Intelligence (BI) are .red[Product] ### Think about who your .green[customer] is ??? First I want to consider the different consumers of data, and how you need to serve them according to their needs and inclinations. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## Who are your dashboards for? #### Dashboards are traditionally targeted at ops. * Measure **all** the things! * Wake me if it's on **fire** * Little feedback goes back **upstream** .img-800w[![Munin](munin-boring.png)] ] ???
Traditionally dashboards have been for operations staff; they've been highly technical and detailed, measuring a huge number of things but at a very low level, and often just not very pleasant to look at. Monitoring and alerting has focused on servers and services, rather than the application domain. The other thing I've noticed is that operations people tend to learn a lot about the way that users and systems behave, but don't often have an opportunity to pass that back up the development pipeline. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## Who are your dashboards for? #### Traditionally targeted at ops, but #### DevOps is a thing. * Development is a different **point of view** * **Instrument** your applications * Measure **trends** and **levels** in your KPIs * Look for **out-of-range** metrics * What does normal **look** like? * Regression-test your **bottom line** ] ??? So, the first group of people who aren't being served well are developers. Disk I/O ops per second is not the most important thing to a developer. What they really need to know is whether their application is working. When I say application I'm abbreviating the term "applications software", that is, software that *does* something. The key question here is: did I, as a developer, achieve what I set out to do? Does my feature work, is my user interface working the way I planned, have I positively or negatively affected the experience of using this product, did I advance the business plan? Businesses make money by providing services to customers, so you need to understand what your true services and points of interaction are, understand what normal looks like, and ensure that when something that's *not* normal happens, the right people know about it. Startups learn all the time that their understanding of what their users wanted was all wrong; the key here is to learn this while you're still in business.
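To make the "out-of-range" idea concrete, here's a minimal sketch of adaptive range checking: flag a KPI sample when it drifts too far from a rolling baseline, rather than comparing it to a fixed limit. The names, window size and tolerance are invented for the example.

```javascript
// Illustrative sketch: flag a KPI sample as out-of-range when it deviates
// more than `tolerance` (a fraction) from the rolling mean of the last
// `window` samples. "Normal" adapts as new samples arrive.
function makeRangeChecker(window, tolerance) {
  const samples = [];
  return function check(value) {
    // Baseline is the mean of recent samples; the first sample defines normal
    const baseline = samples.length
      ? samples.reduce((a, b) => a + b, 0) / samples.length
      : value;
    samples.push(value);
    if (samples.length > window) samples.shift();
    // Out of range when the sample strays beyond tolerance of "normal"
    return Math.abs(value - baseline) > tolerance * baseline;
  };
}
```

Feed it a stream of, say, checkouts per minute, and it answers "does normal still look like normal?" without anyone hand-tuning a threshold each week.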
--- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## Who are your dashboards for? #### Traditionally targeted at ops, but #### DevOps is a thing #### Support and Sales need to know everything's OK, too * Is the first time you know of problems when support escalates? * Do support call ops and ask "Is everything OK, we're getting a lot of calls"? * "Well Mrs Arbuckle, I see other customers are having problems too" ] ??? Your customer-facing staff have data needs too. Often the first time a business learns that normal has left the building is when the support phones start ringing. I've found that for every person that calls there are 99 who just went somewhere else, so if you let your customers tell you about faults that's far too late. Wouldn't it be better if your support staff could know the probable cause of a customer issue before the customer has to explain it? --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## Who are your dashboards for? #### Traditionally targeted at ops, but #### DevOps is a thing #### Support and Sales need to know everything's OK too #### Management want reassurance (and to know you deliver) * The morning report for "How long till my next **Porsche**" * Shipped a great new feature? You now have a **hotline** to the CEO's eyeballs. ] ??? Dashboards might be the only regular communication you have with upper management. It's a cliche that every executive loves a big fat meaningless pie chart for breakfast, but if you give your management a steady diet of interesting and varied intelligence, then you have a way to draw a direct connection between your existence and the bottom line. Your upper management probably don't see the day-to-day business of development, but this is your chance to show the sprint-to-sprint value added by your team. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## Who are your dashboards for?
#### Traditionally targeted at ops, but #### DevOps is a thing #### Support and Sales need to know everything's OK too #### Management want reassurance (and to know you deliver) #### Customers (do you dare?) * What are you **ashamed** of them seeing? * What could you **do** about that? ] ??? Okay, now this one might be scary. Would you show your customers your performance data, or your error rates? Why not? What are you ashamed of? Just asking yourself that question might teach you something valuable. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## Give **customers** a weather report #### Take notice of load spikes * "We are processing your order. This hour, that takes around 25 seconds but could be up to two minutes" ] ??? Let's look at this another way. Data lets you soothe your customers. Every piece of software has a bad day, so if things aren't going well, and you know about it, you can let the customers down gently, give them your sympathy along with realistic expectations. People appreciate sincerity. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## Give **customers** a weather report #### Take notice of load spikes #### Respond to temporary outages * "Sorry, it looks like PayPal has been experiencing high error rates for the last 15 minutes. May we suggest you try AliPay or Credit Card?" * This could head off a flood of support calls or bad reviews ] ??? If you work with banks you know they're a bit like elephants: inscrutable, difficult to win an argument with, and a source of mind-boggling amounts of excrement when you least expect it. If your third-party interfaces are on the fritz you can respond to that, and head off frustration before it happens. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## Give **customers** a weather report #### Take notice of load spikes #### Respond to temporary outages #### Assure customers that errors are being acted on * "We're sorry, this page experienced a problem.
Bug #3342 has already been forwarded to our Dev Team. You'll get an email when it's fixed, or check our .fakelink[change log]" * again, turn your failures into goodwill ] ??? Finally, why do customers have to call to report errors? You know the error happened, you threw an exception. Why don't you turn it around, tell the customer you're on it, and give them a way to follow up? You could even promise to notify them as soon as the fault is rectified. This could prevent them taking their money elsewhere. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## What's relevant to this **support** call? #### Notice recent errors affecting this customer * "Mr Lee encountered .fakelink[Bug #3342] six minutes ago" ] ??? Let's look at how data can help your support team. If you've instrumented your application, and you have the ability to query that data, then you can reconstruct what that particular user did. This turns the act of supporting a customer from palm reading to brain scanning. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## What's relevant to this **support** call? #### Notice recent errors affecting this customer #### Notice recent outages affecting this customer * "Currently experiencing elevated transaction declines for Westpac cards" ] ??? If there is a cluster of failures in your system that's important information for your support staff and customers to know. --- .left-column[ #Strategy ## Who ] .right-column.autodim[ ## What's relevant to this **support** call? #### Notice recent errors affecting this customer #### Notice recent outages affecting this customer #### Remember recent behaviour of this customer * "Dr Patel had her password reset yesterday, and presented her old password three times today" ] ??? Finally, sometimes customers are not reliable witnesses. There's a whole class of common software problems that we can fix if we store more data, even if only for a few hours.
Don't be parsimonious in recording analytic data: record as much as you can, and if storage becomes a problem, thin it out later. --- class: center, middle template: inverse # What? ## Not just **things**, also .green[patterns] and .red[trends] ??? All right, let's look at what kinds of data you should be shepherding. --- .left-column[ #Strategy ## Who ## What ] .right-column.autodim[ ## What should I be monitoring? #### Traditionally - **Things** * system load indicators * rates of key events (pageloads, signups, checkouts etc.) * service status * error alerts ("pages") ] ??? Going back again to the traditional operations monitoring, what most people record is the vital statistics of their servers. A bunch of statistics that you have to understand intimately to even know if there is a problem or not. Nowadays we have elastic servers and multi-tier architectures. A simple graph of operations per second implicitly assumes that resources are fixed, which is rarely true any more. --- .left-column[ #Strategy ## Who ## What ] .right-column.autodim[ ## What should I be monitoring? #### Traditionally - **Things** #### Now - **Patterns** and **Trends** * Business Goals - Rates plus Trends ] ??? What matters is money. Or rather, happiness, which money helps with. Are your customers, servers, and shareholders happy, and are they more or less happy than they were an hour ago? --- .left-column[ #Strategy ## Who ## What ] .right-column.autodim[ ## What should I be monitoring? #### Traditionally - **Things** #### Now - **Patterns** and **Trends** * Business Goals - Rates plus Trends * Unavoidable errors - look for out-of-range levels ] ??? Another issue with traditional monitoring is its binary nature. Things are either good, or they're not and someone is getting turfed out of bed.
But in reality there's a class of faults that are going to be happening all the time, like natural background radiation, and what you want to know is whether or not there's a meaningful trend in the level of occurrence. Again, it's a matter of working out what's normal and looking for exceptions, but adaptively, because today's normal might be different from last week's. --- .left-column[ #Strategy ## Who ## What ] .right-column.autodim[ ## What should I be monitoring? #### Traditionally - **Things** #### Now - **Patterns** and **Trends** * Business Goals - Rates plus Trends * Unavoidable errors - look for out-of-range levels * Location and traffic patterns ] ??? Geolocation can be handy too. Data breaches, denial of service, and clickfraud are going on all the time, and if you can notice a sudden shift in your demographics you may be alerted to imminent problems. I once spotted an extortion attack coming due to the browser traffic share stats looking hinky. --- .left-column[ #Strategy ## Who ## What ] .right-column.autodim[ ## What should I be monitoring? #### Traditionally - **Things** #### Now - **Patterns** and **Trends** * Business Goals - Rates plus Trends * Unavoidable errors - look for out-of-range levels * Location and traffic patterns * User Experience metrics ] ??? I encourage you to think hard about what makes your product nice to use, or annoying to use, and track some indicators. Are people navigating into blind alleys, or experiencing poor responsiveness, or abandoning operations half completed? Record some user sessions and make the designers watch them; I guarantee they'll learn something. --- .left-column[ #Strategy ## Who ## What ] .right-column.autodim[ ## What should I be monitoring? #### Traditionally - **Things** #### Now - **Patterns** and **Trends** * Business Goals - Rates plus Trends * Unavoidable errors - look for out-of-range levels * Location and traffic patterns * User Experience metrics * Health of your third-party interfaces ] ???
Third-party interfaces are a big risk; they're often fragile and sometimes failures go unnoticed. I recall one incident where an edge case in payment processing had been dropping money on the floor for two years, and wasn't picked up because of poor data quality. --- .left-column[ #Strategy ## Who ## What ] .right-column.autodim[ ## What should I be monitoring? #### Traditionally - **Things** #### Now - **Patterns** and **Trends** * Business Goals - Rates plus Trends * Unavoidable errors - look for out-of-range levels * Location and traffic patterns * User Experience metrics * Health of your third-party interfaces * App and product reviews (iTunes, Google, Yelp, Amazon...) ] ??? If you're working in mobile apps, you have a big risk with faults. Firstly, if you have a fault and don't notice, you could end up with a huge number of one-star reviews and tank your app. Secondly, you don't want lukewarm reviews either. The tools are out there to consume your customer feedback, analyze it for sentiment, and highlight the outliers. If people are particularly happy or angry you should look deeper into why, and if they're neither, that lukewarm response is a concern too. Either way, you should know which way the wind is blowing. --- .left-column[ #Strategy ## Who ## What ] .right-column.autodim[ ## What should I be monitoring? #### Traditionally - **Things** #### Now - **Patterns** and **Trends** * Business Goals - Rates plus Trends * Unavoidable errors - look for out-of-range levels * Location and traffic patterns * User Experience metrics * Health of your third-party interfaces * App and product reviews (iTunes, Google, Yelp, Amazon...) * Social media (Facebook, Twitter, et al.) ] ??? The same goes for social media. Feed your Facebook, Twitter, Yelp or whatever into a sentiment analyzer and measure your performance. My point in all of this, and this might sting a bit, is: who cares about servers, really? They're just boxes of expensive dirt. What you should be measuring and monitoring is success.
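To illustrate the sentiment idea, here's a toy scorer for routing reviews by tone. The word lists are invented for the example; a real deployment would use a proper sentiment library or a hosted NLP service rather than this sketch.

```javascript
// Toy sentiment scorer: count positive and negative words in a review.
// The word lists here are invented and far too small for real use.
const POSITIVE = new Set(['great', 'love', 'excellent', 'happy']);
const NEGATIVE = new Set(['broken', 'hate', 'awful', 'refund']);

function sentimentScore(text) {
  let score = 0;
  for (const word of text.toLowerCase().split(/\W+/)) {
    if (POSITIVE.has(word)) score += 1;
    if (NEGATIVE.has(word)) score -= 1;
  }
  return score; // > 0 leaning positive, < 0 leaning negative
}
```

Even something this crude is enough to decide which reviews get forwarded to marketing and which get escalated to support.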
--- class: center, middle template: inverse # When? ## Make Business Intelligence part of your .orange[**DevOps**] processes ??? Okay, now we're at the meat of it. What I like to call dashboard-driven development. The idea that the very nucleus of Agile is to measure, iterate and learn. --- .left-column[ #Strategy ## Who ## What ## When ] .right-column.autodim.tight[ ## When should I build dashboards? #### Early and Often! * Dashboards inform your planning and strategy * Make sure you *really* know the status quo before you code ] ??? To start off, make sure you understand the status quo. You should measure what your users do, and where your value comes from, and make sure your whole team understands the lie of the land. This helps you concentrate your effort where the value is. --- .left-column[ #Strategy ## Who ## What ## When ] .right-column.autodim.tight[ ## When should I build dashboards? #### Early and Often! #### Continuously: Dashboards are (subjective) tests * **Hopefully** you already run tests before you ship * Think about what tests you should run **continuously** ] ??? I hope I don't have to convince anyone that testing is valuable. Software is brittle, but testing before release covers only one set of conditions. Once the software goes out the door, conditions are continually changing. So your dashboards are a kind of test that keeps running after you ship, to make sure that the software keeps working the way you intended. --- .left-column[ #Strategy ## Who ## What ## When ] .right-column.autodim.tight[ ## When should I build dashboards? #### Early and Often! #### Continuously: Dashboards are (subjective) tests #### Releases: Dashboards are longitudinal tests * How does this release compare to the last release? ] ??? One benefit of testing at release time under static conditions is that you can compare this release to the last one. Look for trends in performance and responsiveness, and identify any problems.
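A release-to-release comparison can be as simple as diffing two metric snapshots. This sketch assumes "higher is worse" metrics such as response time and error rate; the metric names and the 10% threshold are illustrative, not a recommendation.

```javascript
// Sketch of a longitudinal release check: compare this release's metric
// snapshot against the previous release and flag any metric that got
// worse (grew) by more than `threshold`, expressed as a fraction.
function findRegressions(previous, current, threshold = 0.1) {
  const regressions = [];
  for (const [metric, before] of Object.entries(previous)) {
    const after = current[metric];
    if (after === undefined || before === 0) continue; // nothing to compare
    const change = (after - before) / before;
    if (change > threshold) regressions.push({ metric, change });
  }
  return regressions;
}
```

Run at release time against stored snapshots, this gives you the "how does this release compare to the last one" answer automatically.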
--- .left-column[ #Strategy ## Who ## What ## When ] .right-column.autodim.tight[ ## When should I build dashboards? #### Early and Often! #### Continuously: Dashboards are (subjective) tests #### Releases: Dashboards are longitudinal tests #### Fridays: Dashboards are your DevOps "airbag" * Is the bucket still under the money waterfall? * Is it Friday? Does that scare you? ] ??? Would you be scared to put out a release on a Friday afternoon? Why is that? If you have a system that needs a team of people on deck in case it explodes, you have room for improvement. I'm waiting for the day that NASA stop doing countdowns. I don't want to hear "3 2 1, liftoff, yay nothing exploded", I want to hear "you're clear to proceed to runway five, have a nice flight". --- .left-column[ #Strategy ## Who ## What ## When ] .right-column.autodim.tight[ ## When should I build dashboards? #### Early and Often! #### Continuously: Dashboards are (subjective) tests #### Releases: Dashboards are longitudinal tests #### Fridays: Dashboards are your DevOps "airbag" #### Coding: Dashboards help you think about Features holistically * Who needs to know about this new feature? Training? * What other parts of our ecosystem will this affect? ] ??? I said this earlier but I want to repeat it. Techniques like pair programming, literate programming and rubber-duck debugging work because the very act of having to explain yourself causes you to think more thoroughly. Building a dashboard is asking yourself "how do I prove that I did a good job?", and also considering all the users. It's not enough to build a feature so that it works; developing is communicating, and you need to communicate to your internal customers, so that they can know if their design worked, if their marketing campaign worked, so that support can diagnose why a user is having a hard time. --- .left-column[ #Strategy ## Who ## What ## When ] .right-column.autodim.tight[ ## When should I build dashboards? #### Early and Often!
#### Continuously: Dashboards are (subjective) tests #### Releases: Dashboards are longitudinal tests #### Fridays: Dashboards are your DevOps "airbag" #### Coding: Dashboards help you think about Features holistically #### During test: Dashboards help you plan for adverse outcomes * **Validate** your assumptions * What do I **do** if this breaks? * Who will notice a failure? Who needs to be **notified**? ] ??? So, your dashboards are a way to measure your success, because that's the real question when you're deciding whether your work is ready to go out the door. You want to know that you did what you set out to do, and conversely that if you failed to do it, the right people learn about that too. --- class: center, middle template: inverse # How? ## Encourage a .purple[**data-loving**] culture ??? So that's strategy, those are the things that I want you to do. The rest of this talk is going to be about some ways to get there. --- .left-column[ #Tactics ## Who ## What ## When ## How ] .right-column.bulletsh4[ ## So, How do I get there from here? #### **Experiment** with your data #### **Develop** dashboards alongside features #### **Test** your dashboards #### **Deploy** your BI configuration as build artifacts. ] ??? You need to love your data as much as your code. To refactor it when needed, and review it, and look for problems in it. --- .left-column[ #Tactics ## Who ## What ## When ## How ] .right-column.autodim.tight[ ## So, How do I get there from here? #### **Experiment** with your data * Derive synergy from bringing your silos together * Visual dataflow tools like .red[**Node-RED**] ([nodered.org](http://nodered.org/)) * Rapid dashboard tools like .green[**Blynk**] ([blynk.cc](http://blynk.cc)) * Explore your data with GUI tools like .purple[**Kibana**] ([elastic.co](http://elastic.co/)) .img-320w[![Node-RED](nodered-frontdoor.png)] .img-200h[![Blynk](blynk_bi_goals.png)] .img-320w[![Kibana](kibana-basics.jpg)] ] ??? 
Businesses always have more data than they realize, and you can learn interesting things by bringing it together. For example, correlate your page load time with your conversion rate, to see how much it really matters. These are some tools that I really like, and I'll go into them in more detail later, but the thing that they have in common is that you can build a targeted dashboard to answer a particular question in half an hour and then ship that dashboard out to data consumers, or just throw it away when you're done. --- .left-column[ #Tactics ## Who ## What ## When ## How ] .right-column.autodim[ ## So, How do I get there from here? #### **Experiment** with your data #### **Develop** dashboards alongside features * Orchestrate your BI stack (Docker, Vagrant, whatever) * Choose tools that produce shippable artifacts * Demonstrate your dashboards as a deliverable ] ??? What we want to do is empower developers to explore and manipulate data. This means they need access to the tools. If you're hosting your own software, make sure developers can push a button and have a working installation. And choose tools that let you ship dashboards as artifacts somehow. You don't want to have to manually recreate your dashboard in a live system. Lastly, be proud of the dashboards: demonstrate them in your product showcases. --- .left-column[ #Tactics ## Who ## What ## When ## How ] .right-column.autodim[ ## So, How do I get there from here? #### **Experiment** with your data #### **Develop** dashboards alongside features #### **Test** your dashboards against your code * ...and vice-versa * Dashboards are **Customer-Facing** product. * Trustworthiness is crucial ] ??? When it comes to QA, you need to ensure that dashboards are first-class citizens. You need to assure their quality. If you're targeting dashboards at management, support and customers, you can't be lying to them. --- class: center, middle template: inverse # Where? ## Dashboards as .blue[**code**] ???
Okay, next I want to look at integrating dashboards into a DevOps workflow. --- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ] .right-column.autodim[ ## Inserting dashboards into your development pipeline #### Code-review your Dashboards * Pretty-print JSON or XML data files * Use visual diffing tools to highlight changes (e.g. pdiffy) ] ??? I spoke earlier about preferring tools that let you ship dashboards as artifacts. With the right tools, you can version control and code-review your dashboards. The tools I like to use have wonderful visual editors, but still allow you to export the configuration as meaningful text. There's also some benefit in using perceptual diffing tools to highlight what you changed from one iteration to the next. These tools can take a before and after screenshot of a user interface or report and highlight the areas that have changed. --- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ] .right-column.autodim[ ## Inserting dashboards into your development pipeline #### Code-review your Dashboards #### Part of your CI test runs * Yes, you can unit-test dashboards * Leverage your integration tests to produce data * Behavioural testing tools to confirm UI outcomes ] ??? And you absolutely can do all kinds of automated testing of dashboards. You can unit-test your data handling logic in isolation, and you can provide known data to your rendering and user interactions to confirm you get the expected behavior. --- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ] .right-column.autodim[ ## Inserting dashboards into your development pipeline #### Code-review your Dashboards #### Part of your CI test runs #### Performance testing - daily/sprintly test * Build a test that stresses your product to breaking point * Confirm that the right alarms go off * Break things, then verify your ecosystem responds ] ??? It's important to assure consistent long-term behaviour too.
You should look at injecting failures into your test runs and confirming that your monitoring notices them and does the right thing. I like to break things and see what happens, and whether I have enough information to diagnose. It's an awful feeling to be in the middle of an incident and be wishing for data that you didn't record. --- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ] .right-column.autodim[ ## Inserting dashboards into your development pipeline #### Code-review your Dashboards #### Part of your CI test runs #### Performance testing - daily/sprintly test #### Automate deployment - easier with some tools than others * Best case - install configuration files as ordinary artifacts * APIs - pull and push * Database - dump/restore * Web scraping - here be dragons ] ??? And when you're happy with your dashboards you should be able to deploy them with no fuss, even on a Friday. Many of the tools I've used are capable of deploying configuration in just the same way as you'd deploy code or images. And where that's not possible, look at whether there's an API or database layer which you can use to transport configuration from dev to staging to production. --- class: center, middle template: inverse # With ## Case Study: Adding value to .yellow[**Logstash**] with .red[**Node-RED**] and .green[**Blynk**] ??? So now I want to give you some concrete examples of tools that I like to use.
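Picking up the deployment point for a moment: Node-RED, for example, exposes an Admin HTTP API where you can POST a reviewed flow file to `/flows`. This sketch only builds the request description; the host, token, and auth arrangements are placeholders, so check the Admin API docs for your Node-RED version before leaning on it.

```javascript
// Sketch: describe an Admin API request that deploys a reviewed flow file
// to a Node-RED instance (POST /flows). The token handling here is a
// placeholder; real setups configure adminAuth and obtain a bearer token.
function buildDeployRequest(flows, token) {
  return {
    method: 'POST',
    path: '/flows',
    headers: {
      'Content-Type': 'application/json',
      // 'full' redeploys everything; 'nodes' and 'flows' are lighter options
      'Node-RED-Deployment-Type': 'full',
      'Authorization': `Bearer ${token}`,
    },
    body: JSON.stringify(flows),
  };
}
```

Feed the result to your HTTP client of choice in a CI step and "deploy the dashboard" becomes the same push-button act as deploying code.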
--- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ## With ] .right-column.tight[ ## .green[Elasticsearch], .yellow[Logstash], and .purple[Kibana] (ELK) * .green[**Elasticsearch**] Fast Distributed NoSQL database * .yellow[**Logstash**] (et al.) Push event streams, application logs, performance data and network stats to an Elasticsearch storage cluster * .purple[**Kibana**] Query, Visualise and Dashboard your data * ELK is Open Source: self-host, AWS, or SaaS * Off-the-shelf docker image suitable for dev, CI and production * Quick interactive querying * Interactively build visualisations and dashboards * Good support for import/export of dashboard components * Weak on certain kinds of analysis and alerting ] ??? Ok, now I hope you're already using tools like ELK or Splunk or similar. These are tools that turn your logfiles and other data streams into a data lake. What I like about ELK is that you can start by just dumping your logfiles and your server and network stats in and look at what patterns emerge. When you decide you're interested in a particular area, you target your logging at facilitating analysis. One of the reasons it's important to build dashboards as you write code is that you are thinking about what information the dashboard is going to need. Another good point about Kibana is that its configuration can be made nicely readable: you build all the searches and dashboards interactively in a web browser, then export everything as JSON. With a bit of effort you can automate the whole commit-review-test-deploy pipeline. --- class: center, middle .img-1000w[![Kibana](kibana-basics.jpg)] ??? Here's an example of a Kibana dashboard. All these elements can be built and refined in a web browser, and then the configuration can be exported as JSON files.
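To give a flavour of the querying behind a panel like this, here's an illustrative Elasticsearch aggregation that buckets error-level log events per hour, so a dashboard can show the trend rather than a raw count. The field names (`level`, `@timestamp`) are assumptions about your log schema, and the interval syntax varies between Elasticsearch versions.

```javascript
// Sketch of an Elasticsearch query body: count error-level log events,
// bucketed per hour over the last `hours` hours. Field names are assumed.
function errorTrendQuery(hours) {
  return {
    size: 0, // we only want the aggregation buckets, not the raw documents
    query: {
      bool: {
        filter: [
          { term: { level: 'error' } },
          { range: { '@timestamp': { gte: `now-${hours}h` } } },
        ],
      },
    },
    aggs: {
      errors_per_hour: {
        date_histogram: { field: '@timestamp', fixed_interval: '1h' },
      },
    },
  };
}
```

The same body works whether it's Kibana rendering the buckets or a Node-RED flow pulling them to decide if an alert should fire.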
--- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ## With ] .right-column.tight[ ## .red[Node-RED] - A Visual system for Wiring the Internet * Open source (Node.js) * Originated at IBM, now under JS Foundation * .red[**Flow**] based data processing - "Fire all the programmers", at last? * Create flows using drag+drop flow editor - `http://nodered.example.com:1880/` * Huge plugin ("node") ecosystem * Integrated dashboarding - `http://nodered.example.com:1880/ui` - or output values to other data stores ] ??? The Node-RED environment is an open-source tool written in Node.js which lets programmers (and arguably non-programmers!) visually connect devices, APIs, databases and webpages in interesting ways. It was created for IoT and building automation. At home, I use it to unlock my front door, and water my plants. But in a business setting it's great for pulling data from multiple sources and acting on the big picture. It has plugins for pretty much every database, message bus and web API that you could find. Where I want to do trend analysis, or adaptive filtering, or sentiment analysis, this is the tool I reach for. If you have your traffic data in Elasticsearch, and your sales data in Postgres, and your server status in Redis, then if you want to produce some report or alert combining pieces of that you can build a flow in Node-RED that pulls all the information together. --- ## Business Goal Monitor .img-420h[![Business Goal Monitor](bi_goals.png)] .img-420h[![Business Goal Dashboard](blynk_bi_goals.png)] ??? Here's an example to give you a flavour: you define inputs, which emit messages. Then you route those messages to processing nodes which think about the data, and then emit messages to output nodes. In this case we're pulling business goal data from Elasticsearch, summarising it, and then pushing the metrics out to an iPhone app.
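Stripped right down, a flow like that persists as JSON along these lines. This fragment is trimmed for illustration: a real export carries extra layout and configuration fields, and the ids and names here are made up.

```json
[
  { "id": "tab1", "type": "tab", "label": "Business goals" },
  { "id": "in1",  "type": "inject",   "z": "tab1", "name": "poll",
    "repeat": "60", "wires": [["fn1"]] },
  { "id": "fn1",  "type": "function", "z": "tab1", "name": "summarise",
    "func": "return msg;", "wires": [["out1"]] },
  { "id": "out1", "type": "debug",    "z": "tab1", "name": "inspect",
    "wires": [] }
]
```

Because that's plain JSON, it diffs, reviews, and ships through a pipeline just like any other source file.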
--- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ## With ] .right-column.small.tight[ ## .red[Node-RED] - Nodes and Flows * The .blue[**coloured boxes**] are .red[**Nodes**] * Nodes are **npm** modules * Drag and drop nodes. Connect with links to form .green[**Flows**] * .green[**Flows**] persist as JSON data (`~/.node-red/flows.json`) * .green[**Flows**] process .purple[**Messages**] (JSON) * Over 750 node modules available * plus a general-purpose **function** node * or write-yer-own nodes .img-640w[![Simplest flow example](simplest_flow.png)] ] ??? You work in Node-RED by building programs called flows, which are a collection of nodes that receive messages, process them, and emit modified or entirely different messages. Each node is an npm module that implements the Node-RED interface. The nodes themselves are amenable to unit testing using any of the Node.js test frameworks. If you write your flows by grouping logic into libraries then you can write test stubs and drivers to unit-test your flows. And once again, all the visually constructed code exports into JSON for review and deployment. --- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ## With ] .right-column.tight.small[ ## (ab)Using .red[Node-RED] for Business Dashboarding * Search most SQL and NoSQL databases (e.g. .yellow[**Elasticsearch**]) * Filter, Smooth, Report-by-Exception * Built-in HTML .purple[**dashboard**] nodes (`node-red-dashboard`) * Push data to .green[**Blynk**] for iPhone/Android Apps .img-320h[![Dashboard UI](dashboarde.png)] .img-320h[![Server Error Dashboard](blynk_bi_errors.png)] ] ??? Here's an example of the kinds of dashboards you can build: on the left is Node-RED's inbuilt web dashboard, which displays quite well on everything from iPhones to smart TVs. On the right is an iPhone app called Blynk which pulls data from a cloud or self-hosted server, to which my Node-RED flow pushes its output.
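To make the function-node idea concrete, here's a sketch of a function-node body doing report-by-exception, wrapped in a named function so it can be unit-tested outside the editor. The threshold is invented for the example.

```javascript
// Sketch of a Node-RED function-node body: pass an error-count message
// downstream only when the level crosses a threshold (report-by-exception).
// Wrapped in a named function so it can be exercised by a test harness.
function errorSpikeNode(msg) {
  const THRESHOLD = 50; // errors per interval considered "not normal"
  if (msg.payload <= THRESHOLD) {
    return null; // in a flow, returning null drops the message
  }
  msg.topic = 'error-spike';
  msg.payload = { count: msg.payload, threshold: THRESHOLD };
  return msg;
}
```

Wire the output to a dashboard widget or a notification node and quiet periods stay quiet, while spikes get someone's attention.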
--- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ## With ] .right-column.small.tight[ ## Blynk - Agile dashboarding for one user or a thousand * .blue[blynk.cc] - cloud data, or self-host * interactive dashboard editor * share an app (with data) or clone (separate data) * free for ad-hoc sharing, pay for app-store distribution .img-320h[![Blynk Widgets](blynk_widgets.png)] .img-320h[![Blynk Sharing](blynk_share.png)] .img-320h[![Blynk Cloning](blynk_clone.png)] ] ??? I want to tell you some more about Blynk. This is another app that was created for use with IoT but which has really good applications for business dashboarding. Each dashboard is associated with a data store on either Blynk's server or yours, and you can pass dashboards around by scanning or emailing QR codes. There are two ways to distribute an app: by sharing it, where the layout is read-only and all the users see the same data, or by cloning it, where you get your own editable copy of the app. Sharing is how you ship dashboards to your users, and cloning is how you pass them through your pipeline as artifacts. --- ## Server Error Monitor .img-420h[![Server Error Monitor](bi_errors.png)] .img-420h[![Server Error Dashboard](blynk_bi_errors.png)] ??? This is a more complicated example (in fact it needs to be refactored into libraries). In this case we were seeing unexplained spikes in error rate and wanted something quick and expedient to show the error levels over the last few days and give a quick notification when a spike happened. The flow not only provides a Blynk dashboard, but will send instant notifications to the ops team when a problem is detected. --- .left-column[ #Tactics ## Who ## What ## When ## How ## Where ## With ] .right-column[ ## Social Media Integration * Receive feeds from Twitter, Facebook etc. * Receive feeds from app stores * Post to Slack, and monitor channels * Analyse, Summarise, Respond, Alert ] ???
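A sketch of the "Analyse / Respond" step on this slide: assuming an upstream sentiment-analysis node has attached a score to the message (the stock sentiment node sets `msg.sentiment.score`), a two-output function node can route strong reactions to different teams. The thresholds and the routing targets are illustrative assumptions.

```javascript
// Hypothetical two-output Node-RED function node: route messages by
// sentiment score. Assumes msg.sentiment.score was set upstream;
// the +/-3 thresholds are illustrative, not calibrated values.
function routeBySentiment(msg) {
  const score = (msg.sentiment && msg.sentiment.score) || 0;
  if (score >= 3) return [msg, null];  // output 1: e.g. forward to marketing
  if (score <= -3) return [null, msg]; // output 2: e.g. alert support
  return [null, null];                 // neutral: drop the message
}
```

Returning an array of messages is how a function node addresses multiple outputs, so each branch of the decision can feed a different downstream flow.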
The last thing I want to talk about today is social media monitoring. Node-RED has good support for this, and you can use simple onboard sentiment analysis or push text out to IBM Watson for deeper analysis. One very simple possibility is that when people start talking about you on social media, you send a heads-up to your Slack channels. It's a great way to keep in touch with your users. Maybe you want to forward your five-star reviews to marketing for quoting. Or send the one-star reviews to support for follow-up. I gave a series of talks last year on things you can do with these tools; you can find links on my website. I encourage you to follow up or get in touch if you want to know more. --- .left-column[ # Agenda ## Who ## What ## When ## How ## Where ## With ## .red[Summary] ] .right-column.bulletsh4[ ## Summary #### Target your dashboards, keep them focused #### You can do a lot more than just display graphs #### Design your dashboards early #### Be Agile: experiment, build, ship, iterate. #### Dashboards are code #### Bring all your data together for synergy ] ??? All right, let's summarise what I've talked about. Your first step is to identify who your data is for and make sure it's in a form palatable to them. Think about new ways to use data beyond just graphs, and use that thought process to inform your software design. Treat your dashboards like code: review them, test them, demonstrate them and ship them with automated tools. Finally, synthesize a big picture from disparate sources.
--- .left-column[ # Agenda ## Who ## What ## When ## How ## Where ## With ## .red[Summary] ] .right-column.small.bulletsh4[ ## Resources, Questions #### My BrisJS talks on Node-RED + Blynk in more depth: - Videos at [YouTube](https://www.youtube.com/watch?v=MoP8zH2hbnY&t=151s) (more to come) - Slides at http://christopher.biggs.id.au/ #### Node-RED - [nodered.org](https://nodered.org/) #### Blynk - [blynk.cc](https://blynk.cc/) #### Me - Christopher Biggs - Twitter: .blue[@unixbigot] - Email: .blue[christopher@biggs.id.au] - Slides, and getting my advice: http://christopher.biggs.id.au/ ] ??? Thanks for your time today. I'm happy to take questions in the few moments remaining, and I'm here all week if you want to have a longer chat. Over to you.