Thursday, November 19, 2015

Technical Estimations: The Love-Hate Relationship Story


Estimations Are Hard

 

For software developers, creating a meaningful technical estimate is one of the most difficult things you will ever do on a regular basis. A good estimate must take into account many (often vague) variables, like the technologies being used, the scope of work and even the client themselves. On top of that, a solution proposed from a development-level understanding of a client’s needs often differs from what the client actually thinks they want (or need) in the first place, especially when they don’t have a technical background.

How developers create and present technical estimations also has consequences, both good and bad. Overestimation can scare a client and lead them to place a project on hold, or even to find another firm to do the same job. Underestimation may initially get the client to accept your proposal, but it puts the integrity of your team and business in jeopardy. Even the smallest miscalculation can lead to confusion, and with so much at stake, it’s no wonder developers cringe at the thought of putting together a technical estimate. While an estimate is rarely perfect, there are still a number of ways to make one as accurate as possible.


 

Reaching A Common Understanding:



Without exception, when your dev team begins the estimation process, they’re naturally hesitant to put together a proposal broken down by days and hours on a piece of paper. Turning a request for proposal (RFP) into a meaningful estimate takes time, research and analysis, and questions are always bound to arise, ranging from intended functionality to the true meaning of user stories. But this is a good thing: developers should be asking questions!

Considering that most clients are not “technical individuals”, what I’ve noticed is that many of them view an estimate as an unchangeable oath written in blood. Developers, in contrast, view an estimate as nothing more than the definition of the word itself: a rough calculation based on what is known at the time. As a result, developers often revise estimates as new information and requirements are discovered, often to the dismay of the client. While a software engineer may understand that even the smallest feature or change in functionality can impact the scope of an entire project, most product owners simply don’t see it that way.


Presentation Is Key: Knowing Your Audience



Being able to distinguish whether your client wants a rough order of magnitude (ROM) or a true request for proposal (RFP) can save a lot of time otherwise spent on unnecessary work. Depending on the project, various members of your team may also bring a different point of view (POV) that will impact the estimation approach being used. Believe me, product managers, developers, quality assurance and sales all see things a little differently. While this can be a good thing, understanding your audience is arguably the biggest factor in how a technical estimate is made and presented, and in what level of accuracy is needed. Knowing whether your estimate is going to another engineer or to the average Joe makes a huge difference here.

In the past, we’ve experimented a bit with different types of technical estimates and have found that other dev teams, technical individuals and companies requiring a SaaS solution want the most comprehensive estimate you can give. On top of that, they want a list of technologies, the justification for those technologies, and an hour-by-hour breakdown of every individual task. These proposals took a stupid amount of time to create, but that’s what the client wanted; that’s who our audience was. On the flip side, some of our clients were small businesses without a technical background who simply wanted a web or mobile application. The estimates we prepared for them ran 1-3 pages, reiterating their needs and summarizing the sprints, each broken down by hours. The difference in time spent between these two types of estimates can be multiple weeks or even months.

Margin for Error:


Developers never make mistakes and everything is done correctly the first time around… (said no one ever!). Even with the absolute best dev teams, not including a margin for error in a technical estimate can be disastrous. If you’re not including some wiggle room for rework, you’re underestimating the total effort needed to complete your deliverables. Give estimates as a range when it makes sense, and be more precise when a range isn’t needed. Mockups are a great example here because clients tend to have their own idea of what looks good, so rework is almost always necessary. However, once the client approves the mockups, the margin for error contributing to rework is not nearly as high. Taking this into account when making an estimate can significantly improve accuracy.

 

Example:



Phase 1.
Mockups & Design: 2-6 days

Description: At this step, Maginfo will prepare mockups and designs for the project. Upon completion, client approval of the mockups will be needed prior to moving to the next step of development.

 

Phase 2.
Development: 6 days

Description: At this step, Maginfo will implement the project based on approved mockups and specifications.
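
To put some numbers behind the range idea, here is a minimal Python sketch of three-point (PERT-style) estimation, one common way to turn optimistic, most-likely and pessimistic guesses into a range with a built-in margin for error. The tasks and figures below are invented for illustration:

```python
# A sketch of three-point (PERT-style) estimation: each task gets an
# optimistic, most-likely and pessimistic guess in days, and the weighted
# mean plus a standard-deviation buffer yields a defensible range.
# All task names and numbers below are invented.

def pert_estimate(optimistic, likely, pessimistic):
    """Return (expected, std_dev) for a single task, in days."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

tasks = {
    "Mockups & Design": (2, 3, 6),  # rework-prone, so a wide spread
    "Development":      (5, 6, 8),  # approved mockups narrow the spread
}

total_expected = 0.0
total_variance = 0.0
for name, (opt, likely, pess) in tasks.items():
    expected, std_dev = pert_estimate(opt, likely, pess)
    total_expected += expected
    total_variance += std_dev ** 2  # variances add for independent tasks

buffer = total_variance ** 0.5  # one standard deviation of wiggle room
low, high = total_expected - buffer, total_expected + buffer
print(f"Estimate: {low:.1f}-{high:.1f} days")
```

Notice that the spread on the mockup phase is wider than on development, mirroring the point above: rework-prone steps deserve more wiggle room.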


The Benefit of a Good Estimate:



For product owners and clients specifically, having an itemized breakdown of functionality at a granular level makes it easier to weigh and prioritize different aspects of the project, and even to put “nice-to-have” features on the back burner. At the end of the day, it would be far easier to just sit down, write some code and bill for the hours, but even developers understand that this is unrealistic from a client perspective. Reaching a mutual understanding between the client and the dev team regarding the scope of work should be the number one priority before kickstarting any project.



 


Thursday, November 12, 2015

Big Data: Reading Information from Sensors and Data Analytics


As technology becomes increasingly sophisticated, big data is being utilized by businesses across countless industries. The tools involved are also becoming more advanced, and sensors are now playing a bigger role in data analytics. Defined as “the statistical analysis of data that is created by wired or wireless sensors,” sensor analytics can be used to help companies run more efficiently, better understand consumer needs, improve targeting efforts and much more.

The only problem is: how do you effectively read sensor data and interpret it in a way that’s useful? Here are some techniques.

 

Spotting Anomalies

Identifying events that don’t conform to typical patterns can be highly important in several circumstances. One example pertains to online security, where an anomaly could indicate a network intrusion, helping a company detect a threat and defuse the situation before it becomes serious. Another pertains to the healthcare industry, where an anomaly in a medical reading would alert doctors or nurses so they could quickly address the issue and potentially save a patient’s life.

Sensor data makes it possible to identify atypical events much more quickly and conveniently than in the past. When you look at a high volume of data where everything is more or less consistent, anomalies become easy to spot, which can be useful in many ways.
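
As a rough illustration, here is a minimal Python sketch of one simple approach: flag any reading that falls too many standard deviations from the mean. The readings and threshold are invented, and a production system would typically use more robust, streaming-friendly methods:

```python
# A sketch of z-score anomaly detection over a batch of sensor readings:
# flag anything more than `threshold` standard deviations from the mean.
# A single outlier inflates the standard deviation, so a modest threshold
# like 2.0 is used here. All readings are invented.
import statistics

def find_anomalies(readings, threshold=2.0):
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mean) > threshold * stdev]

# Mostly consistent temperature readings with one obvious outlier.
readings = [21.1, 21.3, 20.9, 21.0, 21.2, 35.7, 21.1, 20.8]
print(find_anomalies(readings))  # -> [(5, 35.7)]
```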

 

Trend Detection

A big part of staying competitive in business is being able to spot trends and stay on the cutting edge. Wired or wireless sensors help streamline this process because they generate large volumes of data, making it possible to spot overarching trends that might otherwise be difficult to see. To use sensor data in a practical way, businesses can examine it at scale and search for noticeable trends that signal patterns worth factoring into how they run their operations.
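
Here is a minimal Python sketch of one simple way to do this: fit a line to a window of readings and read the trend off the slope. The sales figures and tolerance below are invented:

```python
# A sketch of trend detection: fit a least-squares line to a window of
# sensor readings and interpret the sign and size of the slope.
import numpy as np

def detect_trend(readings, flat_tolerance=0.05):
    x = np.arange(len(readings))
    slope, _intercept = np.polyfit(x, readings, deg=1)
    if slope > flat_tolerance:
        return f"upward trend ({slope:.2f} units per reading)"
    if slope < -flat_tolerance:
        return f"downward trend ({slope:.2f} units per reading)"
    return "no clear trend"

# Invented daily sales counts from a store's checkout sensors.
sales = [102, 98, 107, 111, 109, 118, 121, 119, 127, 131]
print(detect_trend(sales))  # -> upward trend (3.41 units per reading)
```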

 

Visualization

There’s no doubt that humans are inherently visual creatures, and one of the best ways to make use of a large body of information is to utilize data visualization tools. While visualization can be used in numerous ways, one of the most valuable is helping businesses better understand customer behavior and spot opportunities. Data Informed offers a good example:

“Business leaders for a supermarket chain can use data visualization to see that not only are customers spending more in its stores as macro-economics improve, but they are increasingly interested in purchasing ready-made foods.” When it comes to examining data that would otherwise be difficult to interpret, visualization makes it significantly easier and more intuitive.
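
As a hypothetical take on that supermarket example, here is a minimal matplotlib sketch; every number in it is invented purely to show the shape of such a chart:

```python
# A sketch of the supermarket visualization: total spend per month alongside
# the share of spend going to ready-made foods. All numbers are invented.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
total_spend = [310, 325, 340, 360, 375, 390]             # avg $ per customer
ready_made_share = [0.12, 0.14, 0.17, 0.19, 0.22, 0.25]  # fraction of spend

fig, ax_spend = plt.subplots()
ax_spend.bar(months, total_spend, color="lightgray")
ax_spend.set_ylabel("Avg spend per customer ($)")

# A second y-axis, since the share is on a completely different scale.
ax_share = ax_spend.twinx()
ax_share.plot(months, ready_made_share, marker="o")
ax_share.set_ylabel("Ready-made share of spend")

fig.suptitle("Spend is rising, and ready-made foods take a growing share")
plt.show()
```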

Understanding how to effectively read information from sensors and data analytics minimizes guesswork. In turn, businesses can take abstract information and transform it into something much more concrete that can be used in a practical way. And when you consider the long-term implications, this can have a dramatic impact on operations and put companies in a better position to succeed.

 


Thursday, November 5, 2015

Predictive Analytics with Big Data



Big data fuels predictive analytics: without adequate data, it’s difficult for organizations to make accurate predictions about future events. Generally speaking, a large volume of data correlates with a high degree of accuracy, and a smaller volume with a lower degree. Of course, this assumes that an organization is utilizing best practices and effective data management techniques.

When this is the case, the larger the volume of data, the better: as an organization accumulates more data, it can make more accurate predictions and create actionable intelligence for the future.
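
A quick simulation makes the point concrete: estimating an unknown quantity from a noisy process gets more accurate, on average, as the volume of data grows. Everything here is synthetic:

```python
# A sketch of why volume helps: estimate the (unknown) true mean of a noisy
# process from samples of increasing size and watch the error shrink.
import random

random.seed(42)      # fixed seed so the run is reproducible
TRUE_MEAN = 50.0     # the value we are trying to predict
NOISE = 15.0         # standard deviation of the measurement noise

for n in (10, 100, 1_000, 10_000):
    sample = [random.gauss(TRUE_MEAN, NOISE) for _ in range(n)]
    estimate = sum(sample) / n
    error = abs(estimate - TRUE_MEAN)
    print(f"n={n:>6}  estimate={estimate:6.2f}  error={error:.2f}")
```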

 

When is Data Considered Big?

The term “big data” is often used in a broad sense and is somewhat subjective. A widely cited 2011 study by McKinsey & Company defines it as “data sets whose size is beyond the ability of typical database software tools to capture, store, manage and analyze.” Some would say that data becomes big once it reaches the gigabyte range, while others would say petabytes. Regardless of the precise definition, the more data an organization has, the better its decision-making typically becomes.

 

Examples of Utilizing Predictive Analytics with Big Data

By having access to a large body of data such as previous customer purchases and buying patterns, a business can use both stored and real-time information to its advantage when sending out promotions. For example, if a business knows that a group of customers has purchased a particular product, it can send out promotional materials featuring a similar product that those customers would be highly likely to buy.
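
Here is a minimal Python sketch of that idea: find the product most often bought alongside a given product, then promote it to the customers who bought the first but not the second. The purchase histories are invented:

```python
# A sketch of purchase-history targeting: rank products by how often they
# are co-purchased with a seed product, then build the promotion audience.
# All customers and products are invented.
from collections import Counter

purchases = {
    "alice": {"coffee maker", "coffee beans", "mug"},
    "bob":   {"coffee maker", "coffee beans"},
    "carol": {"coffee maker", "mug"},
    "dave":  {"coffee maker", "coffee beans", "filters"},
    "erin":  {"toaster", "bread knife"},
}

def best_companion(product):
    """Product most frequently bought alongside `product`."""
    counts = Counter()
    for basket in purchases.values():
        if product in basket:
            counts.update(basket - {product})
    return counts.most_common(1)[0][0]

promo = best_companion("coffee maker")  # -> "coffee beans" (3 co-purchases)
audience = [name for name, basket in purchases.items()
            if "coffee maker" in basket and promo not in basket]
print(f"Promote {promo!r} to {audience}")  # -> Promote 'coffee beans' to ['carol']
```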

Another example pertains to website exploration. For instance, if a customer has looked at site pages featuring a particular product or service, the company can provide a unique website experience tailored to that customer’s specific interests.

While having only a small body of data would probably allow some degree of accuracy in gauging what customers are looking for, a much larger body of data should give the company a significantly higher level of certainty. The point is that big data is usually advantageous over small data because it helps organizations get the most out of predictive analytics.

 

Accuracy is Contingent Upon Data Quality

On a side note, it’s important to mention that there are limitations to big data: past a certain point, more data becomes counterproductive and a hindrance. For big data to be effective, an organization must utilize some form of data management so that information is properly organized and obsolete records are deleted once they are no longer useful. Basically, big data must be “tamed” and structured in a way that ensures the insights found via predictive analytics are legitimately helpful.
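
One small piece of that “taming” might look like the following sketch: a retention policy that prunes records once they are too old to be useful. The 90-day window and record layout are invented:

```python
# A sketch of a retention policy: drop records older than a fixed window so
# that obsolete information stops feeding the analytics.
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # invented window; tune per data source

records = [
    {"id": 1, "seen": datetime(2015, 3, 1),   "value": 17.2},
    {"id": 2, "seen": datetime(2015, 10, 20), "value": 18.9},
    {"id": 3, "seen": datetime(2015, 11, 1),  "value": 19.4},
]

def prune(records, now):
    """Keep only records recent enough to still inform predictions."""
    cutoff = now - RETENTION
    return [r for r in records if r["seen"] >= cutoff]

print(prune(records, now=datetime(2015, 11, 5)))  # record 1 is dropped
```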

The bottom line is that predictive analytics, used in conjunction with big data, enables organizations to make sound decisions. And when best practices are utilized, this gives an organization a significant edge when predicting future events.
