
The Quantified City: A Closer Look at Chicago's Array of Things


Data-tracking devices, from wearables to time and task management applications, have become a quotidian, normalized part of most Western societies. With Fitbits and sleep trackers everywhere, a question like “How many steps have you taken today?” is regularly asked by employers and by friends over drinks at Friday happy hour.

The quantified self is not a new concept.

What is new is the idea of a quantified city. Cities are collecting massive quantities of data with the implementation of sensor-based technologies. However, most of that data, which is open, sits in silos rarely leveraged by agencies, departments, and organizations for the benefit of citizens.

Chicago is the latest city to crack the syntax of smart cities and hack the Internet of Things. Instead of focusing on data to quantify individual productivity and activity, the city has partnered with researchers to analyze sensor-collected data and measure the City by the Lake’s “fitness.”

Dubbed the Array of Things (AoT), this network of interactive, modular sensors is collecting new streams of data on environment, infrastructure, and activity. This hyper-local, open data can help researchers, city officials, and software developers study and address critical city challenges, such as flood prevention, traffic safety, air quality, and access to civic services.

Diagram of an Array of Things node (source: http://arrayofthings.github.io/images/2016-diagram-lg.jpg)

Chicago’s initiative is a symbiosis of resources from the University of Chicago, Argonne National Laboratory, and the School of the Art Institute of Chicago. The development and research groups collaborate with internet service provider AT&T and are backed by a $3.1 million grant from the National Science Foundation.

A projected 500 sensors will be installed along Chicago streets by the end of 2018.

Map of planned sensor node locations (source: http://arrayofthings.github.io/images/node-locations.png)

Researchers working on the AoT believe collecting data – including readings of air quality and pollutants, light, noise, people’s movements, and other information – will help city officials better understand the urban environment. There are hopes that the data, which will be made publicly available through the city’s data portal, will lead to innovation, as well as better public services.

But beyond making cities more livable and learning from an array of technologies, like the University of Chicago Urban Labs’ location application, the project integrates crowdsourced data, distributing the responsibility for livability across all residents.

Array of Things locations map on Chicago’s open data portal (source: https://data.cityofchicago.org/Environment-Sustainable-Development/Array-of-Things-Locations-Map/2dng-xkng)

Once completed, the Array of Things will make Chicago a leader in smart city innovation. The programmable nodes are designed to host a variety of sensors and devices, providing an urban-scale testing ground for smart city technologies such as new communications or information systems.

“Urban sensing – collecting and using data from sensors in public urban spaces – is essential to the next generation of data science and to improving city service delivery,” said Brenna Berman, Chicago Department of Innovation and Technology Commissioner and Chief Information Officer.

The sensor data, available in standardized form and accompanied by latitude and longitude information, affords communities and interest groups the opportunity to identify potential areas to improve the urban experience.

The AoT will be the central nervous system of cities.

Chicago’s use of location intelligence is just one example of how IoT implementations are being grounded in different local contexts, proving that the concept travels and evolves.

To learn more about how location intelligence complements existing municipal infrastructure, the Array of Things, IoT technologies, and civic technology research, sign up for the CARTO 5, a bi-weekly curation of the best in location analysis and intelligence. Want to know how you can build a city of the future? Check out our 4-Step Framework for Smarter Cities.

Happy Data Mapping!


Examining Potential Impact of H-1B Reform with Data Visualizations


On April 3, 2017, the U.S. Citizenship and Immigration Services (USCIS) will begin accepting 2018 applications for H-1B visas. The H-1B program permits U.S. employers to temporarily hire foreign workers with specialized skills each year. In January, however, members of the House of Representatives introduced H.R.670: High-Skilled Integrity and Fairness Act of 2017, a bill proposing significant changes to the current H-1B application process.

Critics have accused the H-1B program of outsourcing American jobs to immigrant workers at lower salaries. Yet this argument overlooks the fact that large segments of the American workforce lack necessary skills, especially tech skills. The tech industry, which accounts for 12 percent of all jobs in the United States, is likely to feel the effects of H-1B reform immediately, as H.R.670 would impose restrictions on the recruitment and retention of talent.

Let’s explore possible ramifications for the tech industry, and the American economy more generally, should H.R.670 go into effect. Using open government data, spatial analysis, and data visualizations, let’s see what these changes may mean in the not-so-distant future.

The Current H-1B Application Process

To hire foreign workers through the H-1B program, prospective employers are required to submit a Labor Condition Application (LCA), and must designate themselves as either an H-1B Independent Employer or an H-1B Dependent Employer. An employer’s status is based upon a ratio that considers a company’s size and the number of workers already employed on H-1B permits. For example, a company employing 50 or more workers is designated a Dependent Employer when at least 15 percent of those employees are on temporary work permits.

H-1B Dependent Employer applications are subject to extra scrutiny while under review by the Department of Labor, an added obstacle that perhaps deters employers from applying as Dependent Employers. But this scrutiny can be avoided in one of two ways:

A wage exemption for applicants earning $60,000 or more each year.

or

An education exemption for applicants holding an advanced degree (master’s or above) from a U.S. institution of higher education.

Crucially, these exemptions level the playing field for both employer types. Of the total 85,000 visas granted each year, 20,000 are reserved for applicants meeting the education exemption requirements, irrespective of employer type. Given the wage exemption, the probability of visa allocation becomes the same for both H-1B Dependent and Independent Employers: presuming equal odds for the roughly 236,000 applicants competing for one of the 85,000 visas available through the current lottery system, each applicant had about a 36 percent chance of selection last year.
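As a back-of-the-envelope check, here is that equal-odds arithmetic in Python, a minimal sketch using only the figures quoted above:

```python
# Back-of-the-envelope lottery odds under the current system, using the
# FY2017 figures cited above and assuming every applicant has equal odds.
total_visas = 85_000
total_applications = 236_000

p_selected = total_visas / total_applications
print(f"Selection probability: {p_selected:.1%}")  # ~36.0%
```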

The data visualization below is built with open data provided by the U.S. Department of Labor on H-1B visa applications received for fiscal year 2017, which totaled nearly 236,000.

In this proportional symbol map, circles are sized to represent the number of visas applied for by H-1B Dependent Employers (red) and Independent Employers (blue). The widget panel on the right-hand side displays the mean salary paid to employees on work permits for each employer type. H-1B Dependent Employers paid an average salary of $73,000, whereas H-1B Independent Employers paid an average of $89,000, which seems to confirm critics’ suspicions regarding lower wages.

Now, assuming equal statistical probability, let’s see how the H-1B Dependent Employer exemptions play out within the current process in the data visualization below:

From the congressionally approved cap of 85,000 H-1B visas for fiscal year 2017, the lottery system allocated approximately 40,000 to Dependent Employer applicants and the remaining 45,000 to Independent Employer applicants.

The lottery-based system’s random selection seems to promise fairness to applicants irrespective of employer type, which would ideally calm opponents’ suspicions regarding Dependent Employer applications.

And yet, H.R.670 would target H-1B Dependent Employers specifically, while also replacing the current lottery system with a merit-based system based upon market-value pay allocation.

Potential Consequences of the High-Skilled Integrity and Fairness Act of 2017

H.R.670 proposes replacing the lottery with a merit-based system in which decisions would be determined by an employee’s merit, which the bill presumes is reflected in annual wages set by market-based allocation. In addition, H.R.670 would change the Dependent Employer exemptions in two ways. First, the education exemption would be eliminated, which would lead to a significant decrease in the number of H-1B Dependent Employer visas and an increase in H-1B Independent Employer applications. Second, the wage exemption for Dependent Employers, tied to what the LCA terms the “prevailing wage” system, would be raised from $60,000 to $130,000.

The LCA’s current three-tier “prevailing wage” system for all employers would also be readjusted to align annual wages for employees with specialized skills to market-based allocations for such positions. Although figures have yet to be revealed, analysts are confident that the minimum wage brackets for all professions would be raised, crowding applicants into the new lowest bracket.

Subsequently, H-1B preference would be given to level-three candidates with an annual salary equal to 200 percent (or more) of their American counterparts’ in a given profession. Each tier would follow suit: level-two applicants would need to earn 150 percent (or more), and level-one applicants 100 percent (or more), of the annual salaries of their respective American counterparts. The implicit logic behind H.R.670’s proposed increase to the wage exemption is that these above-market salaries demonstrate the invaluable expertise an H-1B applicant offers, which, ironically enough, is now being valued according to market-based allocations.

Given current level-three wages for all professions, a negligible fraction of applicants would meet either the 200 percent or 150 percent thresholds. As such, nearly all applicants would fall within 100 percent of level-three wages.
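To make the tier logic concrete, here is a hypothetical sketch of how an offer might be ranked under the proposal. The 200/150/100 percent thresholds come from the bill as described above; the function itself is purely illustrative:

```python
# Hypothetical helper ranking an offer under H.R.670's proposed tiers,
# as described above (200% / 150% / 100% of the prevailing wage).
def hr670_preference_level(offered_salary: float, prevailing_wage: float) -> int:
    """Return the preference tier (3 = highest priority, 0 = ineligible)."""
    ratio = offered_salary / prevailing_wage
    if ratio >= 2.0:
        return 3  # 200% or more of American counterparts' salary
    if ratio >= 1.5:
        return 2  # 150% or more
    if ratio >= 1.0:
        return 1  # at or above parity
    return 0      # below the prevailing wage

print(hr670_preference_level(offered_salary=130_000, prevailing_wage=90_000))  # -> 1
```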

Let’s take a look at what this could mean in the data visualization below.

Observations

H.R.670’s changes would restrict H-1B Dependent Employer eligibility to such an extent that only 928 visas would be issued. The H-1B Dependent Employer workforce would immediately decrease by nearly 39,000. These figures are based upon available open data and are liable to change, but the information is nevertheless alarming. The New York Times, for instance, reports that the recent fervor for immigration reform, of which this bill is a part, aptly titled the “Trump effect,” has yielded a 40 percent decrease in international student applications to American universities.

As congressional debate on H.R.670 nears, and as this type of decline spreads beyond academia, we are left with far more questions than answers: Will USCIS release open data for H-1B visa recipients post-2012? What will the proposed reforms to the LCA’s “prevailing wage” system look like in terms of dollars and cents? How will market-based figures for the LCA “prevailing wage” system be determined? And, moreover, how can these figures account for economic fluctuation within a globalized world? What measures are being taken to (re)train American workers in the tech skills that will be needed should the bill pass? Cognizant Technology Solutions, as shown in the data visualization above, could lose nearly 9,800 employees, whereas Deloitte Consulting could gain close to 8,700 employees given my statistical assumptions.

The data visualization below reflects what these fluctuations could look like should the H-1B process change.

Observations

Companies will need to reassess the allocation of resources because of external disruption. Although H.R.670 assumes that these measures will return the American workforce to greatness, the external pressures imposed within a tight time frame mean companies will need to make decisions in their own best interest to keep business afloat.

Are there any viable alternatives?

Since recruitment of new talent would be severely restricted following H.R.670 reforms, are there any options for companies to even retain current workers?

Hypothetically speaking, if companies were to exempt employees from proposed H.R.670 restrictions by reallocating excess revenue, how much would the companies need to spend?

The widget panel for the data visualization above displays, for each employer, the annual capital needed to retain current employees and the percentage of annual revenue that retaining H-1B employees would cost. For nearly all employers, the prohibitive costs do not provide a viable solution. Cognizant Technology would need to raise $1.6 billion to retain its temporary foreign workers, nearly 11.6 percent of its annual revenue.
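For transparency, here is the arithmetic behind that figure. The revenue number is back-computed from the percentage quoted above rather than taken from any official filing:

```python
# Reproducing the Cognizant arithmetic quoted above. The revenue figure is
# back-computed from the ~11.6% share cited in the post, not an official number.
retention_cost = 1.6e9                   # USD needed to retain H-1B staff
annual_revenue = retention_cost / 0.116  # implied revenue, ~13.8B USD
print(f"{retention_cost / annual_revenue:.1%} of annual revenue")  # 11.6%
```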

Conclusion

Although the proposed H-1B reform bill has yet to be passed, the preceding exploration has raised several questions that need answers, and soon. H.R.670’s reforms would have long-lasting effects felt not only by temporary foreign workers whose lives would be disrupted, but also by nearly every corner of the American economy that touches the tech industry.

So as the H-1B visa application pool opens in less than a week, let’s demand more open data, and government transparency more generally, from elected officials before it’s too late.

Happy Spatial Analysis

Four Spatial Analysis Techniques to Boost Outcomes in 2017


The debut of Apple’s iPhone in 2007 ushered in a location data revolution. The exponential rise in mobile devices, crowdsourcing applications, and smart gadgets connected within the Internet of Things over the last decade, however, has generated an unprecedented amount of location data leading to what some are calling Location Data 2.0.

The ability to transform location data into business outcomes has become a new litmus test within our data-driven economy, and companies are employing Location Intelligence tools, like spatial analysis, to pass this test while also surpassing the competition.

Javier de la Torre, our CEO, recently hosted a webinar examining the new role of location data and its impact within the business community.

The availability of and access to location data, Javier explained, has alerted businesses to new optimization strategies and has altered the foundation upon which businesses are modelled. Uber, for instance, could not offer Uber Pool without customer location data.

During the webinar, Javier highlighted four types of spatial analysis expected to gain popularity in 2017. Let’s take a look at how each technique can transform location data into business outcomes.

1. Spatial Optimization for Dynamic Routing

A driving force behind many data implementation initiatives is the promise of operational “optimization.” But the target is constantly on the move, which has made it difficult for analysts to measure success.

Businesses applying spatial optimization to location data, however, are relocating the meaning of “optimization.”

Waste management companies, for instance, are benefiting from the abundance of available location data, especially within smart cities, in an attempt to optimize resource allocation.

Whether managing fleet deployment or collection frequency, spatial optimization can be applied to location data such as waste volume measurements along routes to determine collection frequency.

2. Linear Programming for Constrained Optimization

Waste management optimization also requires taking into account city-specific constraints, which can be accomplished with linear programming.

This spatial analysis technique accounts for contextual conditions to determine the optimum outcomes given site-specific constraints.

Returning to the example of waste management, managers can use linear programming to create route assignments that reduce collection frequency, and by extension, fuel costs.

At the same time, linear programming factors into the equation external constraints, such as waste disposal facilities. In order to reduce collection frequency, waste collection services must ensure that disposal facilities are not overloaded on any given day.
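To illustrate how such a constrained optimization might be set up, here is a minimal sketch using SciPy’s linear programming solver; the routes, costs, and capacities are invented for the example:

```python
# A minimal sketch of constrained collection-frequency optimization with
# scipy's linear programming solver. All figures are illustrative.
import numpy as np
from scipy.optimize import linprog

fuel_cost = np.array([120.0, 95.0, 150.0])   # cost per collection run, by route
weekly_waste = np.array([40.0, 25.0, 60.0])  # tons generated per route per week
truck_capacity = 10.0                        # tons hauled per run
facility_limit = 18                          # runs the disposal site can absorb weekly

# Decision variable x[i]: collection runs per week on route i.
# Minimize fuel spend, subject to hauling all waste on each route
# (capacity * x >= waste, flipped into <= form) and the facility cap.
A_ub = np.vstack([-truck_capacity * np.eye(3), np.ones(3)])
b_ub = np.concatenate([-weekly_waste, [facility_limit]])

result = linprog(fuel_cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(result.x)  # optimal runs per route, e.g. [4.0, 2.5, 6.0]
```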

Did you miss the webinar? Watch it anytime on-demand!

Download

3. Gradient Boosting Models and Machine Learning

The latest turn in the location data revolution is shifting toward gradient boosting and machine learning to generate optimization algorithms for future price predictions.

In March 2017, Foursquare’s CEO, Jeff Glueck, introduced Foursquare Analytics, a dashboard promising to put the company’s proven power of “location intelligence in the hands of brands.”

Aspiring to emulate the surprising success of retailer T. J. Maxx, Foursquare Analytics measures “chain-level foot-traffic performance.”

In the clip below, Javier discusses how the business model of Foursquare Analytics is predicated upon large volumes of location data provided from user “check-ins.”

More specifically, Javier examines Foursquare’s analysis of fast-food chain Chipotle following last year’s E. coli contamination outbreak, and discusses the difference in forming predictions based on sales data versus in-store foot-traffic. Take a look:

Although the year-over-year decline in sales showed little change, in-store customer foot-traffic showed signs of improvement in Q1 of 2017.

As such, location data seems to offer a more robust dimension for both measuring chain-level performance as well as predicting future trends and patterns.
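For readers curious what gradient boosting looks like in code, here is a toy scikit-learn sketch trained on synthetic foot-traffic features. It is emphatically not Foursquare’s actual pipeline:

```python
# A toy gradient boosting model for store foot-traffic prediction, in the
# spirit of the Foursquare example above (not their actual pipeline).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
# Synthetic features: [week of year, visits last week, visits same week last year]
X = rng.uniform(low=[1, 100, 100], high=[52, 1000, 1000], size=(500, 3))
y = 0.6 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 25, size=500)  # fake visit counts

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)
print(model.predict([[14, 420, 480]]))  # projected visits for week 14
```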

4. Indoor Analytics

That Foursquare turned toward location intelligence after studying T. J. Maxx’s success is not necessarily surprising given that retailers have been increasing revenue for some time with location intelligence.

Indoor analytics, like machine learning, builds gradient boosting models with location data not only to predict in-store traffic more accurately, but also to understand customer segments and behavioral patterns more clearly.

Indoor analytics can reduce overhead spending as data visualizations, featuring custom basemaps for individual stores, can spotlight service gaps in high-traffic areas requiring a reallocation of staff resources.

Conclusion

June 29, 2017 will mark the ten year anniversary of the iPhone, and the location data revolution more generally.

The business landscape has changed in the intervening ten years, and will continue to do so as more industries start transforming location data into business outcomes through the tools and techniques of Location Intelligence.

Uber and Foursquare are cornering a market predicated on leveraging and licensing location data, and expect further changes in 2017 to traditional industries ranging from waste management and retail to fast food.

Whether applying spatial analysis to location data to improve optimization or prediction depends upon the “where factor,” and Location Intelligence provides the wherewithal to businesses joining the revolution.

Happy Data Mapping!

4 Ways Data Enrichment Can Improve Your Raw Business Data


As massive amounts of data are stockpiled in databases and CRM platforms, executives are turning to data analysts to make sense of it all. But why are so many businesses still struggling to turn location data into profitable outcomes?

It’s not you, it’s your data.

Implementing data-driven decision making can only show a return on investment for businesses if analysts ask the right questions with the right data. But analysts often find themselves working with incomplete, inconsistent, or even incorrect data from which few, if any, actionable insights can be discovered.

Data enrichment helps solve this problem (and it happens to be the first step in our Location Intelligence methodology).

Data enrichment is a process that enhances, refines or otherwise augments existing data, typically with imported datasets.

By enriching raw data with supplemental datasets and advanced spatial analysis techniques, analysts generate location-specific information wherein data-driven solutions to challenging business problems can be discovered. Here are four ways you can get started enriching your data:

1. Geocode inconsistent data

Geocoding is one of the first steps any business can take to enrich its data. If you have any type of address data, geocoding can convert it into latitude and longitude coordinates.

The added value here is the standardization geocoding provides: analysts can conduct more in-depth visualizations and analyses in a fraction of the time. (Plus, it’s a lot easier to visualize coordinates when you get to stage three of the Location Intelligence method.)

Pro-tip: Don’t have physical address data? Try geocoding IP addresses instead!
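As a quick illustration, here is one way to geocode an address in Python with the geopy library and the free Nominatim service. The address and app name are placeholders, and CARTO’s built-in geocoder follows the same request-and-response pattern:

```python
# A minimal geocoding sketch using geopy with the free Nominatim service.
from geopy.geocoders import Nominatim

geocoder = Nominatim(user_agent="data-enrichment-demo")  # identify your app
location = geocoder.geocode("1600 Pennsylvania Ave NW, Washington, DC")
if location:  # geocoders return None when an address can't be resolved
    print(location.latitude, location.longitude)
```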

2. Augment with routing information

If you’re looking to calculate route optimization, or if knowing the distance between two points would be helpful in your data analysis, then you’ll want to investigate enriching your data with routing. Talk to your LI provider about the best way to enrich with routing, or check out tools like OpenStreetMap (and CARTO).
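For a taste of what routing enrichment looks like, the sketch below queries the public OSRM demo server, which runs on OpenStreetMap data. The coordinates are arbitrary, and the demo server is rate-limited, so treat this as exploration rather than production code:

```python
# Sketch: fetch driving distance and time between two points from the public
# OSRM demo server (built on OpenStreetMap data). Coordinates are lon,lat.
import requests

start, end = (-73.9857, 40.7484), (-73.9442, 40.6782)  # Midtown -> Brooklyn
url = ("http://router.project-osrm.org/route/v1/driving/"
       f"{start[0]},{start[1]};{end[0]},{end[1]}?overview=false")
route = requests.get(url, timeout=10).json()["routes"][0]
print(route["distance"] / 1000, "km,", route["duration"] / 60, "min")
```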

3. Enrich point data with areas of influence analysis

Enriching your data with an area of influence analysis involves creating “isolines,” contour lines that connect points of equal travel time or distance around a location. This lets you view polygons showing how far someone can travel from a given location within a set amount of time.

Imagine you have a variety of stores and want to see who lives within a 15-minute walk of those stores. Creating an area of influence will transform your point data into polygon data, which helps more precisely define your area of influence. Take a look at this demo using subway stops and demographic data.
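True isolines require a travel-time service, but as a rough stand-in, the sketch below approximates a 15-minute walk as a fixed-radius circle and filters customer points with the haversine formula (all coordinates are illustrative):

```python
# Crude stand-in for an isoline: a ~1.25 km circle (15 min at 5 km/h),
# used to check which customers fall within walking range of a store.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

store = (40.7484, -73.9857)
customers = [(40.7527, -73.9772), (40.7061, -74.0087)]
walk_radius_km = 5.0 * (15 / 60)  # speed (km/h) * time (h)
print([c for c in customers if haversine_km(*store, *c) <= walk_radius_km])
```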

4. Incorporate demographic measures

Enriching data with spatial boundaries is important, but adding demographic measurements can also help analysts get a better picture of their customer or target audience.

Population demographics are often provided by importing census tract data from a curated catalog. Popular measures for customer discovery include age, race, gender, education level, occupation, average income, and political party affiliation, to name a few.
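In practice this enrichment is often a simple attribute join on a tract identifier. The sketch below shows the pattern in pandas, with invented column names and values:

```python
# Sketch: attach census-tract measures to geocoded customer records with a
# plain attribute join (column names and values are illustrative).
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2],
    "tract_geoid": ["36047051600", "36061010100"],  # from geocoding / spatial join
})
tracts = pd.DataFrame({
    "tract_geoid": ["36047051600", "36061010100"],
    "median_income": [58_000, 91_000],
    "median_age": [31.2, 38.5],
})
print(customers.merge(tracts, on="tract_geoid", how="left"))
```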

Conclusion

Whether you’re discovering consumer behaviors and trends from past transactions, accelerating time to purchase within the buyer’s journey, or designing marketing campaigns tailored to specific customer types, data enrichment is an important first step in your overall Location Intelligence strategy.

Have data enrichment hacks of your own? Share them with us on Twitter, Facebook, or LinkedIn!

Happy Data Enriching

3 Businesses See More Green After Going Green


Earth Day, an annual event rallying support for environmental protection, will take place this Saturday, marking the event’s forty-seventh anniversary. A growing environmentally conscious consumerism among millennials has helped support green initiatives as more and more businesses “go green.”

CARTO is thrilled to see this business trend as it aligns with our mission to advance sustainability and help communities better prepare for climate change with data-driven solutions.

In honor of Earth Day 2017, check out these projects that put Location Intelligence in action to act against climate change.

1. Vizonomy’s Climate Risk Platform

Vizonomy, a global design and technology consultancy firm, built a Climate Risk Platform to assess the potential impact of climate change. Incredibly, this open source software used open data to help local governments lower costs associated with risk analysis by 80 percent.

Read more about location data’s role in efforts to combat climate change in our case study on Vizonomy!

Download

Check out these three projects for more ways location intelligence performs risk assessments!

2. Global Forest Watch

Looking to attract eco-friendly customers? Then check out Global Forest Watch (GFW), a deforestation-monitoring initiative that can help businesses with location planning, as environmental insights can be found with the click of a mouse. The use of location data can provide your business with information on forest change or nearby land conservation, which can have an impact on your daily operations.

Check out these three projects for more ways location intelligence supports forest conservation!

3. SimpleWater

Filtering systems have replaced office water coolers, and for good reasons. But just how clean is your office water? Well, you can find out thanks to SimpleWater, a company using spatial analysis and location data to identify sources of water pollution. This service provides smart water testing that includes personalized health analysis and treatment recommendations. Promoting water quality is a great way to attract new business.

Check out these three projects using location data to support access to clean water!

Conclusion

These are just a few examples of how location intelligence can transform location data into outcomes for environmentally-conscious businesses.

Have a favorite example we missed? Then let us know on Twitter, Facebook, and LinkedIn!

Happy Earth Day!

The Best Conferences for Location Intelligence in 2017


Location intelligence (LI) is becoming increasingly vital to a wide variety of industries, but it can still be hard to find conferences and events where you can get smart on this growing sector.

Never fear! We’ve picked some of the best conferences around the globe to learn about location intelligence and the business applications for this emerging technology. Don’t see a conference you expected to see? Just shoot us a tweet @CARTO or email marketing@carto.com and we’d be happy to add it to the list. Read on, and get ready to spend your professional development dollars wisely.

CARTO Locations

April 26-27, 2017 in Madrid, Spain

Our own CARTO Locations is a two-day summit exploring the world of location intelligence. The summit features speakers and panel leaders from the fields of analytics, data visualization, geospatial analysis, and data science.

2017 Center for Geographic Analysis Conference: The Drone Revolution in Spatial Analysis

April 27-28, 2017 in Boston, MA

This conference is specific to the impact drones have had on mapping. You can view the full program schedule here, which includes sessions on the ethical and legal issues of drones and software options.

GISTAM 2017: The International Conference on Geographical Information Systems Theory, Applications and Management

April 27-28, 2017 in Porto, Portugal

This conference covers GIS industry topics in great detail, with five main focus areas: data acquisition and processing; remote sensing; modeling, representation, and visualization; knowledge extraction and management; and domain applications.

MLconf: The Machine Learning Conference

May 2017 to November 2017, multiple locations

The Machine Learning Conference takes place at multiple locations throughout the year, including Atlanta, San Francisco, and Seattle. While the schedule for some locations has not been posted, you can view details from the most recent one in New York to learn how LI typically plays a part.

Strata Data Conference

May 22-25, 2017 in London, UK

The general theme of this conference is “make data work.” Several of the sessions mentioned would appeal to LI interests, including one on creating interactive maps.

Deep Learning Summit

May 25-26, 2017 in Boston, MA

The third annual Deep Learning Summit focuses on artificial intelligence, but includes at least one session that speaks directly to LI: “Scaling Deep Learning Models to High Resolution Satellite Image Classification on the NASA Earth Exchange Platform.”

Geo IoT World 2017

June 6-8, 2017 in Brussels, Belgium

With a tagline like “Where Geolocation powers IoT innovation,” people interested in LI are sure to feel right at home. The full schedule isn’t yet available, but you can view the pre-program details here.

Smart Cities Connect Conference and Expo

June 25-28, 2017 in Austin, TX

The Smart Cities Connect Conference brings together the leaders of more than 200 progressive cities. The schedule is broken down into five sections, including “Network and Data,” which has several sessions connected to LI.

ICC 2017: International Cartographic Conference

July 2-7, 2017 in Washington, D.C.

Though the bulk of this conference is dedicated to the art of cartography, there is also a significant emphasis on the technology advances in mapping. While a detailed schedule isn’t yet available, you can view the schedule at-a-glance here.

Esri User Conference

July 10-14, 2017 in San Diego, California

Serving the global GIS user community, this conference has multiple components on the schedule and promises 300 moderated sessions and 450 hours of technical training. While not as cutting edge as it used to be, this is still a worthwhile conference for those looking to specialize in GIS.

SciPy 2017: Scientific Computing with Python

July 10-16, 2017 in Austin, TX

While the schedule for this conference hasn’t yet been released, several of last year’s sessions are applicable to LI industry professionals, including this one and this one.

ICGSAM 2017: International Conference on Geoinformatics and Spatial Analysis Methods

August 7-8, 2017 in Amsterdam, The Netherlands

This conference brings together leading scientists, researchers, and scholars in the geoinformatics and spatial analysis industries. You can view a full list of the presentations here.

FOSS4G

August 14-19, 2017 in Boston, MA

The annual FOSS4G conference is the largest global gathering focused on open source geospatial software. FOSS4G brings together developers, users, decision-makers and observers from a broad spectrum of organizations and fields of operation.

GIS and Remote Sensing Conference

October 2-3, 2017 in Vienna, Austria

The theme for this conference is “Innovation of Spatial Data Infrastructure for Sustainable Development.” Topics such as remote sensing in urban environments and Global Navigation Satellite System (GNSS) are among the major scientific sessions offered.

Smart Cities Week

October 3-5, 2017 in Washington, DC

While there hasn’t yet been an official schedule released, this conference promises cutting edge technology for professionals in the smart city industry.

State of the Map US

October 19-22, 2017 in Boulder, CO

SOTMUS provides the opportunity to connect with other mappers, businesses, government agencies, and nonprofits, all collaborating around the free and editable map of the world. Particularly good for people who want to learn how to work with OpenStreetMap data and hack on the latest mapping improvements.

GIS-Pro 2017

October 23-26, 2017 in Jacksonville, Florida

Hosted by the Urban and Regional Information Systems Association, this is the 55th annual GIS-Pro conference. This year’s schedule hasn’t yet been released, but you can still view last year’s here.

ACM SigSpatial 2017: International Conference on Advances in Geographic Information Systems

November 7-10, 2017 in Redondo Beach, CA

Billed as “the premier annual event” for the ACM Special Interest Group on Spatial Information, this conference seeks to cover all aspects of GIS. View last year’s schedule here.

LBS 2018: ICA Commission on Location-Based Services Conference

January 15-17, 2018 in Zurich, Switzerland

While not scheduled until 2018, this conference brings together scholars from numerous disciplines to discuss the influence of location-based services. More information is expected to be released as the dates get closer.

International LiDAR Mapping Forum

February 5-7, 2018 in Denver, Colorado

This conference is also not scheduled until next year, but promises to bring together LiDAR and 3D geospatial data collection professionals to discuss changes in the industry. You can view last year’s conference program here.


Q&A: A Look at NYC's Open Data Approach with Mayor's Senior Project Manager


Open data is justifying all its hype by making local governance easier, more accessible, and more transparent for small-town and big-city residents alike.

Jeff Ferzoco and Adrienne Schmoeker recently sat down to discuss the rise of open data communities, the significance of big data, location data analysis, data contextualization, and just how New York City’s Open Data Law is closing the door on disempowered civic solutions.

Jeff Ferzoco is our Public Sector Customer Success Manager here at CARTO. He specializes in how cities and local governments can use open data and location intelligence to better serve their citizens. Adrienne Schmoeker is Senior Project Manager for Open Innovation Initiatives in the New York City Mayor’s Office for Technology and Innovation.

Watch the full conversation or check out a summary of some of the key questions they covered below:

FERZOCO: What is open data?

SCHMOEKER: Open data is data that’s collected by a city and shared back with its constituents. It may be fed in by the citizens themselves—for example, through New York City’s 311 information number.

The information collected there is anonymized and then made available on the city’s open-data portal. Or the data might be gathered in other ways. For example, the city’s snowplow trucks have sensors that detect which streets have been plowed.

We publish that information back and give it to New Yorkers so that they can keep us accountable.

FERZOCO: What is the value of open data to cities?

SCHMOEKER: The promise of open data is the promise of more participatory government.

People have more information about where they live and how their government operates, and they can then take action.

FERZOCO: How has the value of big data changed over time?

SCHMOEKER: Our data inventory has grown enormously. When we started about seven years ago, we had about 160 data sets; we’re now up to over 1,600, including 200 automated sets.

In addition to growing supply, we are also working on how to market our data. The percentage of New Yorkers who use open data is still pretty small.

So how do we truly make it for everyone? First, we revamped the website, adding a tutorial and a more prominent search feature. We also hosted our first Open Data Week, which engaged some 900 people over 12 events.

FERZOCO: Who is the audience that you’re trying to reach and what has been their response and reception?

SCHMOEKER: Our audience is: Everyone.

The great irony of open data is that we don’t yet have great data about who uses it and how. So we’re in a sponge phase of doing a lot of research and listening.

There are so many use cases and applications that people are coming to us with. Physicians, for example, are interested in using open data in their health practices. We are also reaching out to educators, data journalists, and nonprofit organizations.

But ultimately, it’s the public. Anyone can request a data set. And you don’t even need to be a New York resident.

FERZOCO: How are agencies adjusting to the upcoming open data publication deadlines?

SCHMOEKER: In New York, open data is a law, thankfully, and it’s acting as a big stick that says: “You have to publish all of your data sets.”

We coordinate with every agency, each of which has an open data coordinator. Two agencies doing a particularly great job of it: the Parks Department and the Taxi and Limousine Commission (TLC).

The important thing is to encourage agencies to operate in a more data-driven way and to advocate for more resources to build out analytics for their agency.

What needs to happen is a change of culture: this data belongs to New Yorkers, they deserve to understand it, so let’s put it out there.

FERZOCO: Do you have advice for smaller cities that don’t have open data policies?

SCHMOEKER: If you don’t have a “stick” in the form of a law, then you need a crop of carrots. You can start by identifying the champions within your government who are excited about open data.

Give that agency the resources and moral support to get it off the ground in their agency, and then put them on the highest pedestal you can find. Showcase them locally, nationally, in conferences, on social media.

This will get other people to start catching on, and get people in your community to start demanding their right to open data.

SCHMOEKER: One thing that’s inspiring to me is the civic tech movement that is springing up across the US.

Groups like the Code for America Brigades or Meetup.com are helping like-minded people find each other.

In New York City alone, there are at least a hundred meetups for tech-minded people. Find ways to tap into communities with the skillsets and passion to use the data, and leverage that.

It’s amazing what people are willing to do for a simple thank-you.

FERZOCO: Is it, or should it be the responsibility of agencies to provide insight on the data they publish?

SCHMOEKER: With any data set, you have to understand the context of how the data is collected in order to do a meaningful analysis. And by placing data on a portal, you’re divorcing it from that context.

We try to solve that on our site by providing an introductory page for each set with basic information, data dictionaries, and additional context.

Mapping and visualizations are incredible tools for providing that context.

FERZOCO: Is the city considering having nonprofits not only use open data, but publish their own data?

SCHMOEKER: I would love that.

I don’t have a specific timeline for it, but the ethos of open data is collecting as much information as possible, and the city doesn’t have it all.

We also work with the Port Authority, the State of New York, the Department of Education, and the nonprofits that are doing amazing work in collaboration with the city. Without their information, we’re painting an incomplete picture.

This Q&A is excerpted from a longer interview with Adrienne Schmoeker from the New York City Mayor’s Office of Technology and Innovation. To learn more about how open data is helping New York City, you can view the complete webinar here.

A Really Good Guide on Location Intelligence Implementation


Most organizations are replete with data and data sources. What to do with that data for better business outcomes is not simply a question of database management, but one of strategy, enrichment, visualization, analysis, and taking action.

Location Intelligence Workflow

Even with all the data businesses collect, the important questions being asked still get unsatisfactory answers. Extracting insights, and answering those questions, with Location Intelligence (LI) is what enables effective data-driven decision making.

This guide can help the executive, data analyst, or enterprising associate with a cache of location data, the curiosity to explore it, and a need to communicate data-driven business solutions.

Location Intelligence Workflow

0. Strategize

Just like pressing “0” in a European elevator to go to the ground floor, your business strategy is the ground floor of the Location Intelligence methodology.

Building a strategy before actually handling data will assist in a focused and effective implementation of LI for your overall outcomes.

Start with a clear understanding of goals, roles, responsibilities, executive expectations, and the channels of communication available.

A) Aces in Their Places

First, assemble an LI team within your organization. This cohort is responsible for the implementation and execution of LI across the company. It should be cross-functional, including strategists, data and business analysts, and a marketing representative or customer relations manager (to deliver results externally, if necessary), and should be overseen by an executive-level employee.

If your company already has a GIS department you may want to consult it. However, LI does not require any specific GIS expertise.

Next, assign a project manager or point-person. The project manager will be responsible for documenting strategy, providing regular communication to external stakeholders, and ensuring the entire team is on target to reach the outlined LI goals and objectives.

B) Set Goals & Objectives

A goal is a broad outcome that encompasses the desired business result or achievement, while an objective is a measurable action taken towards reaching that goal. Now is a good time to document the goals and objectives for this project.

We recommend using the often-employed SMART framework to determine goals. The SMART framework ensures that your goals are:

  • (S)pecific
  • (M)easurable
  • (A)ttainable
  • (R)elevant
  • (T)imely

The objectives will function as indicators that measure progress toward the overall LI goal(s). Objectives can comprise quantitative or qualitative criteria, such as the amount of increased market spending for a specific customer segment or the efforts and activities performed to improve customer satisfaction.

Try to revisit your goals and objectives once a week to ensure the team is moving in the right direction. This may mean that, over time, the original goals and objectives are modified. Be sure to document any changes in a single place that all team members can access.


C) The Initial Business Question

Location Intelligence really starts with a question: What opportunity or challenge is my organization trying to solve?

When beginning a comprehensive Location Intelligence plan, it’s common to focus on all the collected data. Try not to do this! While the gigabytes of information might be enormous, excellent data analysis never starts with the dataset – it starts with questions. Why was the data collected, what’s interesting about it, and what stories can it tell?

It’s also typical to propose questions like, “I want to know what’s in the data,” or “I want to know what the data means.” Sure, but what is meaningful?

If you’re lucky, your company has effectively communicated company-wide initiatives and challenges. The LI team can easily align efforts and potential questions with the overall company strategy. However, if your situation is less advantageous, conduct a sprint or scrum-like brainstorming session with the executive team to inform and jumpstart your business questions.

A good and appropriate question might be something like: Where is the best place to target my marketing efforts to reach my most engaged customers? The more specific your question, the more precise and clear the visual result will be. When questions are too general and broad (i.e., “exploratory data analysis”), the results and outcomes will be generic and often only understood by those already versed in the data.

Ultimately, the proposed questions should be revisited, refined, and thought over (and under) based on insights learned from each of the different stages of LI implementation.

D) The Location Intelligence Ecosystem

The successful implementation of Location Intelligence often depends on an organization’s support infrastructure. To reinforce the effectiveness and prosperity of LI, leverage multi-departmental expertise. Identify existing knowledge bases, portals, or intranets for communicating findings, as well as any visualizations and models. The idea is to create a regular and recurring feedback loop to guide efforts and decrease time-to-insight. Make sure there are clear and open channels of communication to reinforce location data insights.

LI should not be seen as a separate business undertaking, but as an instrument to inform and influence decisions company-wide.

Location Intelligence Workflow

1. Enrich

Applying Location Intelligence can only provide a return on investment if the proper questions are asked using the most precise sets of data. Organizations often work with incomplete, inconsistent, or even (gasp!) incorrect data, from which few, if any, actionable and accurate insights are discovered.

Through data enrichment, internal datasets are made more useful and easier to manipulate and represent spatially. Data enrichment is a process that enhances, refines, and augments existing data with imported datasets.

We recommend enriching your data with the following steps.

A) Conduct a Business Data Audit

Before identifying potential external sources of new data, complete an audit of your existing business data. The audit should provide a visible scope of the type(s) of data readily accessible to you and the Location Intelligence team.

It can also be helpful to explore current datasets and repositories while asking these questions:

  • What type of data is the organization collecting?
  • What is the key performance metric being sought with the collected data?
  • Where are existing data repositories?
  • Can the location intelligence team access these repositories?
  • What type of file format is the data stored in?
  • Is data stored in geo-specific file formats like GeoJSON, KML, or Shapefile?
  • Are any location-specific attributes, such as physical addresses, postal codes, and area codes, being collected?
  • Is the data geocoded or georeferenced?

PRO-TIP: On Postal Codes

While many businesses track data based on postal codes (i.e. zip codes), a true LI whiz wouldn’t use these geographically bounded numbers and letters as the ONLY basis for business insights. Relying on postal codes as the determining data-tracking measure limits the ability to go further and perform the best analysis.

B) Standardize & Clean Your Data

Datasets cannot be properly visualized and analyzed if they are not normalized, structured, and uniform. Unstructured or “dirty” data can mean that an address was recorded manually, resulting in variations in string data, such as “St.” vs. “Street”, or even blank cells.

To normalize your data, we recommend tools like Alteryx, Trifacta, Dataiku, OpenRefine, FME, or Excel. It is always best practice to use a data normalization method that is compatible with your Location Intelligence platform. Additionally, some LI providers offer data cleaning services.
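As a small illustration of such a cleaning pass, here is a pandas sketch that standardizes the “St.” vs. “Street” variation mentioned above and drops blank cells; the normalization rules are illustrative, not exhaustive:

```python
# A small normalization pass in pandas mirroring the issues above:
# inconsistent street abbreviations and blank cells (rules are illustrative).
import pandas as pd

df = pd.DataFrame({"address": ["12 Main St.", "99 Oak Street", None]})
df["address"] = (df["address"]
                 .str.strip()
                 .str.replace(r"\bSt\.?$", "Street", regex=True))
df = df.dropna(subset=["address"])  # or flag blanks for manual review
print(df)
```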

PRO-TIP: Take these questions into consideration when dealing with data

  • Is the data clean?
  • Are there null values in any cells?
  • Is the data relational?
  • Does it have consistent rows and columns per “location”?
  • Does the data conform to an internal style guide or data governance procedure?

C) Filter & Focus

Next, determine what data is needed to produce your data visualizations and to perform the appropriate spatial analysis. For a more practical and malleable dataset, remove any extraneous variables or entities that aren’t relevant to the underlying business question; this will help reduce load times and other issues attributable to the size and configuration of your datasets.

For example, to analyze trends, patterns, and outliers related to U.S. regional markets, removing data columns and rows attributed to international regions reduces the size and scope of your dataset.
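Here is what that filtering step might look like in pandas, with hypothetical file and column names:

```python
# Sketch: trim a dataset to the U.S. regional analysis described above,
# keeping only the columns that matter (file and column names are illustrative).
import pandas as pd

sales = pd.read_csv("sales.csv")  # hypothetical CRM export
us_sales = sales.loc[sales["country"] == "US",
                     ["store_id", "region", "lat", "lng", "revenue"]]
us_sales.to_csv("us_sales_focused.csv", index=False)
```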

D) Evaluate Additional Data Sources

Your team has made it this far (high five!) and the internal data is starting to take shape. Now consider what other types of data can assist in a solution to your business problem.

Revisiting your original goals, objectives, and questions can help determine potential third party data enrichment sources. Fortunately, in addition to paid data sources, there are thousands of open data portals that can be accessed to augment business data. Check out this list of 40 Brilliant Open Data Projects for inspiration.

Important questions to consider when searching for additional data sources:

  • What demographic measures should be incorporated to enrich the data?
  • What sources and tools are most appropriate for this type of enrichment and analysis?
  • Can measures be refined to extract patterns and trends related to the consumer base?
Location Intelligence Workflow

2. Visualize Your Data

Up to this point, the previous steps have involved examining data at a granular level. From this view it can be extremely difficult to gain a “big picture” understanding of your location data. A platform that allows for easy data integration, as well as a myriad of visual displays, is the best solution in providing a global scope for enterprises.

At this stage, questions that arise may include: How should the Location Intelligence team interact with “live” data? How can the representation of data unravel over time?

Using dynamic animation to document the evolution of a dataset, or interaction to control the time span, can be a useful function for understanding and interpreting datasets.

Ultimately, choosing the data visualization method that best meets your objectives depends on your location data needs. In LI, the most important visual asset is the interactive map.

A) Choose a Data Visualization

The process of choosing your data visualization is possibly the most important decision in a data visualization project. Deciding how to represent data can influence the previous steps, like what type of data is acquired, as well as subsequent processes, like the particular information you extract.

By now, most analysts and executives are accustomed to thinking of data as fixed values, but data isn’t static. It is important to consider how to represent data to adjust to new values. This is a necessity because most data comes from the real world, where absolutes don’t exist. The temperature changes, purchasing patterns shift, or a product launch causes consumer behavior to drastically change.

Key visualization components to consider:

  • What geo-spatial representation aligns with the established business challenge? Think about Basemaps and Map Styles.
  • Is this visualization appropriately representing my data? Am I choosing the right color schemes, information windows, and pop-ups?
  • Are data layers relaying information truthfully and accurately? How will interaction with my data depict scenarios and findings correctly?
  • Who is the audience for this visualization?

B) Identify Your Audience

A fundamental purpose of your data visualization is to be seen and shared. Sharing the data visualization means sharing insights, which aids collaboration toward higher-level organizational goals and objectives. Taking the audience into consideration will make the interpretation of data and maps more understandable and unambiguous.

PRO-TIP: Take these questions into consideration when deciding on the best location-data visualization

  • What are the goals and expectations of my audience?
  • What does the audience hope to learn?

Making a data visualization clear doesn’t mean assuming people using or viewing the map are idiots and require the “dumbing down” of the interface. It does mean valuing clarity and accuracy over ostentatiousness.

Location Intelligence Workflow

3. Analyze

Further clarifying the representation of location data is an essential step in data analysis. Calling more attention to particular data layers and establishing a visual hierarchy contributes to the readability of your Location Intelligence findings.

Adding interaction, by granting the user control over the exploration of data, the selection of a subset of data, or the viewpoint, can be the functionality that transforms insights into action.

As another example of a stage affecting an earlier part of the process, this stage can also affect refinement, as a change in viewpoint might require a redesign of the data distribution.

A) An Analysis Method That’s Just Right!

You’re almost there!

Now that a visualization has been generated, it can be used as a working model to analyze and iterate. Decide on the best initial analysis by revisiting the original questions, business challenges, and desired outcomes. What questions were posed and what is the right way to test those questions? This can get a little tricky. It’s helpful to consider some of the many possible analyses out there.

Possible analysis methods include:

  • Database Analysis: Data manipulation and filtering, numeric aggregations, and many other methods found in traditional BI tools.

  • Geospatial Analysis: Measure distance and proximity, count points in polygons (see the sketch after this list), perform spatial manipulations and joins, and use many other practical spatial functionalities found in GIS platforms.

  • Location Data Analysis: Detect clusters and outliers, predict market volatility, predict future customer patterns, and incorporate scientific modeling and machine learning.
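To ground one item from the list above, here is a sketch of a classic geospatial operation, counting points in polygons, using GeoPandas with illustrative layer and column names:

```python
# Sketch of one method from the list above: counting points in polygons with
# GeoPandas (layer and column names are illustrative).
import geopandas as gpd

stores = gpd.read_file("stores.geojson")        # point layer
districts = gpd.read_file("districts.geojson")  # polygon layer

joined = gpd.sjoin(stores, districts, predicate="within")
counts = joined.groupby("index_right").size().rename("store_count")
districts = districts.join(counts).fillna({"store_count": 0})
print(districts[["district_name", "store_count"]])
```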

Location Intelligence Workflow

4. Take Action

Now that you have enriched, visualized, and analyzed the data, it’s time to take that beautiful data visualization and make real change. With a clearly represented location data visualization and application constructed, it’s now possible to:

  • Make adjustments to the overall business strategy
  • Maintain a location application that updates and iterates new findings
  • Identify new objectives and business outcomes
  • Request additional resources
  • Share insights across departments
  • Collaborate with new stakeholders
  • Set new goals!

If you’ve made it this far, you might be scratching your head. No, the methods for LI implementation are not new, but isolation within individual fields has prevented these methods from being used cooperatively.

Location Intelligence decentralizes individual disciplines and places the emphasis and focus on providing a new and contextual meaning to data, instead of the siloed perspective and tools of a particular field or department. Complex datasets can be accessed, explored, and analyzed by anyone in a way that simply was not possible in the past.

There are dozens of quick tools for developing data visualizations in a cookie-cutter fashion in office programs, on the web, and elsewhere, but complex datasets used for specialized applications require unique treatment.

Any data visualization tool used for generic purposes will produce generic displays, which ultimately produce generic AND disappointing results. This guide aims to help you understand the value of location data as a tool for human decision-making, using Location Intelligence.

If you are interested in taking your strategy, plan, and analysis further, signing up for a consultation with an LI expert is the starting point on your map for success.

Implement Location Intelligence in your business

Request a free consultation

Project Templates

A Really Good Guide on Location Intelligence Implementation

Access on Google Sheets

The Future of Location Intelligence


For two days, more than 250 leaders in Location Intelligence gathered at CARTO Locations in Madrid to share and learn new ways businesses are turning location data into outcomes.

CARTO Locations, our first customer-centric conference, held last month, showcased Location Intelligence in action. A central message emerged from the conference: everyone has location data, but not everyone has located location data’s value.

Team members, partners, and customers gave presentations illustrating location data in action in relation to some of the following tech topics:

  • Satellite Imagery
  • Remote-sensing
  • Geoalgorithms
  • Developing Apps with Mobile SDK
  • Machine Learning
  • Cloud-based Geoservices

If you missed the conference (or want to refresh your memory), here are just a few takeaways coming out of CARTO Locations.

There’s more to analysis than prediction

Predictive analytics tends to dominate conversations regarding the value of data, and for good reason. But, Stuart Lynn, Head of Research and Data at CARTO, reminded conference participants of two other types of analysis that also help add value:

Optimization: The aim of this analysis is to make systems more efficient by studying the geospatial distribution of data.

Inference: The aim of this analysis is to determine whether patterns within existing data are random or meaningfully related.

Optimization, inference, and prediction, Stuart explained, are changing the ways in which businesses can locate opportunities.

Traditional mapping boundaries are being redefined.

This increase in types of analysis has led to new ways of mapping areas based not on static census tracts, but rather on communities determined by smaller data patterns.

Elena Alfaro, Head of Data and Innovation at BBVA, gave a presentation on urban analytics demonstrating how the location data from credit card purchases yielded valuable insights into customers’ banking needs that would not have been discovered using static zip codes.

Telecommunication, social, and transactional data are great resources for businesses looking to expand, since site-selection decisions can be made based on the location of a desired customer base.

Data might be a commodity, but it is not oil

As the demand for proprietary and third-party data grows, it should be remembered that this market is based upon an entirely new business model. Neither Uber nor Foursquare could exist without location data, a reality causing many to declare that data is the new oil.

Yes, location data is a commodity. No, location data is not the new oil.

This sentiment was heard in keynotes from both Javier de la Torre, who proclaimed 2017 to be “The Year of The Customer” for CARTO, and Sergio Álvarez, who mentioned upcoming product and design features that will help users work collaboratively with location data. Needless to say, both talks repeated the need to continue democratizing access to data.

The value of location data, as both David González, CTO and founder of Vizzuality, and Luan Jaupi, Head of Information Technology and GIS at HALO Trust, reminded us during an inspiring panel, resides in its capacity to do environmental good. Global Forest Watch (GFW), for instance, is helping close the gap between available and usable location data. More specifically, the project’s intuitively designed interface enables complex information to be more easily conveyed to local communities by situating insights in a contextual location. As such, communities can undertake actions, such as resiliency planning, in order to confront risks in advance.

Learn how to put location data to work for the common good at our upcoming webinar with David González on Thursday, June 22nd.

Register Today!

Next steps

There is much more to say about the state of Location Intelligence after this remarkable conference. Check back soon as we begin uploading the sessions to CARTO Locations.

In the meantime, for all things Location Intelligence follow us on Twitter, LinkedIn, and Facebook.

Happy Data Mapping

4 Simple Steps Enigma Took to Turn Public Data into Insight


There is a nearly endless supply of open data ready for businesses and nonprofits to use. The challenge is that creating something useful with that data is not always easy. Open data comes in a variety of formats and doesn’t always make sense, especially when you’re trying to analyze data in a historical context.

But some companies are cracking the code with the help of location intelligence.

Last month, Enigma Labs launched the world’s first Sanctions Tracker, a website that visualizes and contextualizes changes to the US sanctions program. Imposing sanctions is an integral function of US foreign policy—and one that is driven heavily by executive action.

Enigma’s aim was to provide transparency into not only how the government relies on sanctions to shape policy, but how President Trump’s actions compare with those of presidents past.

Enigma is an operational data management and intelligence company. They work to empower people to interpret and improve the world around them. They deliver on that ambitious goal by placing data into the context of the real world and making it connected, open, and actionable.

Like lots of open data (and big data in general), sanctions data was only available in a raw format, which did not allow for interpretation, analysis, or contextualization.

Enigma applied the 4-step Location Intelligence process to take thousands of data points, enrich them, visualize them, analyze them, and enable users to take action by exploring how sanctions have evolved based on geography, administration, and relation to geopolitical events. They uncovered some interesting insights:

  • Sanctions programs for specific countries are global in nature. For example, the North Korea sanctions program includes sanctions for entities in multiple countries around the world.
  • There is an interesting intersection of programs in specific localities. For example, Moscow currently has sanctions from 4 out of 5 of the programs depicted.
  • A number of sanctioned entities have addresses within the United States. People wouldn’t typically think that the Iran or Terrorism sanctions programs would include entities within the United States.

If you’re working with a large dataset and want to turn it into something useful, read below to see how Enigma applied location intelligence to their challenge:

Step 1: They got the data ready.

The Office of Foreign Assets Control (OFAC) publishes a wealth of historical sanctions records dating back to 1994. Unfortunately, they aren’t published in a format ready for analysis.

Like lots of open data, the OFAC data comes in giant blocks of text that must be parsed for unique identifiers like place of birth, passport and national ID numbers, aliases, and addresses, which in turn must be cleanly formatted, geocoded, and deduplicated.
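
As a rough sketch of what that prep work looks like in pandas (the column names, the regular expression, and the choice of geopy’s Nominatim geocoder are illustrative stand-ins, not Enigma’s actual tooling):

    import re
    import pandas as pd
    from geopy.geocoders import Nominatim

    raw = pd.read_csv("ofac_raw.csv")  # hypothetical export of the raw text blocks

    # Pull one identifier out of each free-text block; real records need many such patterns.
    raw["passport"] = raw["remarks"].str.extract(
        r"Passport\s+([A-Z0-9]+)", flags=re.IGNORECASE, expand=False)

    # Geocode address strings to lat/lon (rate limits and retries matter in real use).
    geocoder = Nominatim(user_agent="sanctions-demo")

    def geocode(address):
        location = geocoder.geocode(address)
        return (location.latitude, location.longitude) if location else (None, None)

    raw[["lat", "lon"]] = raw["address"].apply(geocode).apply(pd.Series)

    # Deduplicate entities that appear under several programs with identical identifiers.
    clean = raw.drop_duplicates(subset=["name", "passport", "address"])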

Enigma then aggregated the 69 sanctions programs into 33 more general categories to more cleanly visualize the larger historical trends at work.

Key to their success was working with subject matter experts to make sense of historical data. For example, OFAC has a list of current sanctions programs but does not provide a list of historical ones. Enigma validated their data cleaning each step of the way with a subject matter expert to make sure they were making correct decisions.

Step 2: They visualized the data on a map.

Sanctions data had never been visualized spatially, though sanctions and compliance experts all recognize the global nature of sanctions programs.

Sanctions Tracker Map

It became clear that in order to derive real insight from this dataset, Enigma had to take a spatial approach to visualizing the data.

The visualization they chose shows five sanctions programs over the past 23 years, each labeled with a distinct color on a dark matter basemap. This clearly shows the networked nature of sanctioned entities and the global nature of the sanctions program.
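
A rough open-source equivalent of that view (a sketch only, not Enigma’s actual stack) can be built with geopandas and contextily, assuming a point layer carrying a program category column:

    import geopandas as gpd
    import contextily as ctx
    import matplotlib.pyplot as plt

    # Hypothetical layer: one point per sanctioned entity, with a "program" category.
    entities = gpd.read_file("sanctioned_entities.geojson").to_crs(epsg=3857)

    # One distinct color per program, drawn over CARTO's dark matter tiles.
    ax = entities.plot(column="program", categorical=True, legend=True,
                       markersize=4, figsize=(12, 8))
    ctx.add_basemap(ax, source=ctx.providers.CartoDB.DarkMatter)
    ax.set_axis_off()
    plt.show()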

Step 3: They analyzed the data spatially.

Enigma conducted plenty of non-spatial analyses on their data, but some of the key findings were only apparent looking at the data through a spatial lens.

Enigma was looking at changes in a few critical programs over time. The map was key to demonstrating trends in geographic dispersion of sanctions and the time-based map makes very clear the growth in the size of the sanctions list over the past 20 years. The insights are endless, but a few key ones that popped out:

  • The North Korean and Iranian sanctions programs are truly global in nature. They include entities on multiple continents.
  • There is a visible shift in focus of the narcotics trafficking sanctions program from Colombia to Central America and Mexico.
  • You can see a sudden onset of the Russian sanctions program following the annexation of Crimea.

Just looking at this dataset through a rows and columns view or with a traditional BI tool wouldn’t have shown these spatial findings.

Step 4: They took action.

Enigma took the visualization they had created and made it accessible so that the public can stay informed about changes to sanctions programs. The public now has access to sanctions data that is much more approachable and can better understand sanctioned entities in the context of their connections and relationships to the real world.

Sanctions are a huge part of our foreign policy. This data empowers those people working with the current administration to make better foreign policy decisions by showing how geopolitical events correlate with sanctions and how the actions of the current administration compare with those of previous administrations.

What could you do with old open data? CARTO encourages you to use Enigma’s Sanctions Tracker as a prime example of the potential of transforming your data into real insight and outcomes. Enigma offers thousands of datasets, including this one, for free, so take a look and get started!

The 4 Types of Analytics Shaping Location Data Today


Lately, location data has received a lot of press and attention, and rightly so. Organizations that utilize location data have shown it to be effective in enhancing marketing efficiency, reducing risk, improving medical research, and facilitating urban planning. Location data combines information from diverse sources in new ways to create knowledge, make better predictions, and tailor services.

But what exactly makes the spatial so special?

In an interview with Forbes, Javier de la Torre, CEO at CARTO, explained, “Eighty percent of data has a location component….But only about ten percent of organizations are really taking that location data to drive meaningful insights to help them optimize and make better decisions.”

A year later, organizations are just beginning to realize that data doesn’t heed artificial boundaries: everything is related to everything else, relationships change from place to place, and movement matters.

But the delay in getting on the location data train is understandable. In a short period of time, businesses have had to adapt and navigate unfamiliar and slightly non-traditional commerce terrain. The integration of new digital infrastructure, like the Internet of Things, machine learning, artificial intelligence, big data, and predictive analytics, in consumer products has become ubiquitous and essential for all competitive businesses. A foundational component of these various technological evolutions is location data. And fundamental to using location data for business optimization, and improving the decision-making process, is applying analytic methods.

The 4 Types of Analytics

Location intelligence can be used in all four types of analytic methods.

However, like any famous foursome, the four types of analytics all have a reputation for performing distinct actions that yield different results depending on a given business challenge. Let’s take a look at how location data operates within each of the specific analytical models illustrated by the following site planning examples.

Descriptive: What happened?

Descriptive analytics creates a summary of historical data to yield useful information and possibly prepare data for further analysis. Using descriptive analytics you can calculate X using the number of Y in a given area.
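
In code, “the number of Y in a given area” is typically a spatial join followed by a group-by. A minimal geopandas sketch, with hypothetical layer and column names:

    import geopandas as gpd

    stores = gpd.read_file("competitor_stores.geojson")  # points (the Y)
    areas = gpd.read_file("trade_areas.geojson")         # polygons, same CRS assumed

    # Count the stores falling inside each trade area, i.e. the X of the definition above.
    joined = gpd.sjoin(stores, areas, predicate="within")
    counts = joined.groupby("area_id").size().rename("store_count")
    print(counts.sort_values(ascending=False).head())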

Jet, an online retailer recently acquired by Walmart for $3 billion, assessed historical location data trends and found that a store that’s constantly changing and includes interactive events at select locations could signal the future of brick-and-mortar retail.

Even though data showed that thousands of mall-based stores closed around the U.S. in 2016, by the second quarter of 2017, Jet opened Fresh Story on Manhattan’s Upper West Side. The pop-up, open for only six weeks, serves Jet’s larger goal of raising awareness around its food delivery service as well as its more niche artisanal offerings.

Diagnostic: Why did it happen?

Diagnostic analytics is a deep examination of data to understand the causes of events and behaviors. It is characterized by drill-down functionality and techniques such as geographically weighted regression, which determine the factors that cause X and use those factors as parameters for calculating Y.
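
Geographically weighted regression itself is available in the open-source mgwr package. The sketch below uses random placeholder arrays where real store observations would go:

    import numpy as np
    from mgwr.gwr import GWR
    from mgwr.sel_bw import Sel_BW

    # Placeholder observations: 50 store locations, one target (e.g. sales), two predictors.
    coords = list(zip(np.random.uniform(0, 100, 50), np.random.uniform(0, 100, 50)))
    y = np.random.normal(100, 10, (50, 1))
    X = np.random.normal(0, 1, (50, 2))

    bw = Sel_BW(coords, y, X).search()     # choose a bandwidth from the data
    results = GWR(coords, y, X, bw).fit()  # one local coefficient surface per predictor
    print(results.params.shape)            # (50, 3): intercept plus two local coefficients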

J.C. Penney is experiencing severe financial losses, highlighting the downward spiral for many department stores around the world. The loss isn’t as bad as many retail analysts had estimated, but sales at the retailer have fallen for the third consecutive quarter, sending the company’s stock down 42 percent in 2017.

After diagnosing the potential cause for the decline, J.C. Penney is working on modernizing its locations in an attempt to postpone the liquidation of 138 locations.

Predictive: What could happen in the future?

Predictive analytics is the use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data. It is currently the most recognized type of analytic method used in business solutions. In a nutshell, it is the analysis of patterns and trends to predict the number of X in the coming year(s).

It is the Yoko Ono of the group.
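
Mechanically, a first pass at prediction can be as simple as fitting a model to historical counts. A toy scikit-learn sketch with invented figures:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Invented history: store visits per year.
    years = np.array([[2012], [2013], [2014], [2015], [2016]])
    visits = np.array([41_000, 44_500, 47_900, 52_300, 55_800])

    model = LinearRegression().fit(years, visits)
    print(model.predict(np.array([[2017]])))  # naive forecast for the coming year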

It is estimated that Amazon has around a 60 percent market share of book sales through its website. So why does it need physical stores at all?

It is hard to discern the motivations for Amazon’s latest go at brick-and-mortar. It is clear that the retail titan has a lot of ideas about how physical retailing can be improved, ideas that come from its data-centric approach in online retailing.

Amazon forecasts that its physical stores will be an important way to introduce the public to new and unfamiliar devices. While the technologically savvy are comfortable purchasing artificial intelligence devices online, like the Echo, there are considerable chunks of the population that still need to see the new tech up close first, which means books are probably not the main focus of Amazon’s demographically unique and consumer data-driven stores.

Prescriptive: What is the appropriate response to potential future events?

Prescriptive analytics uses optimization and simulation algorithms and is dedicated to finding the best course of action for a given situation. It is the powerful, but quiet, brother (i.e. a George Harrison type) of descriptive and predictive analytics. Using prescriptive analytics you can offer X at variable rate based on Y (determining Y from descriptive analytic methods) and focus Z efforts on low-risk areas where you can better the price of your competitors.
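
As an illustrative toy (not any vendor’s implementation), a linear program can “prescribe” per-region promotion spend under a shared budget:

    import numpy as np
    from scipy.optimize import linprog

    # Invented figures: expected incremental margin per promo dollar in three regions.
    margin = np.array([0.30, 0.22, 0.18])

    # Maximize margin * spend, i.e. minimize -margin * spend, under a total budget
    # and a per-region cap (this toy model ignores diminishing returns).
    res = linprog(c=-margin,
                  A_ub=[[1, 1, 1]], b_ub=[100_000],  # shared budget
                  bounds=[(0, 50_000)] * 3)          # per-region cap
    print(res.x)  # suggested spend per region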

ShoppingCart-Prescriptive

El Corte Inglés, the largest Spanish department store chain and global leader in distribution, is piloting Situm’s indoor positioning technology to improve the shopping experience inside of its department store locations.

Relying on smartphone devices and without the need for beacons, El Corte Inglés hopes to reduce deployment and future maintenance costs and to extend the location data technology to other stores in the future.

Bottom line: Location Intelligence

MoreQuestions-LocationIntelligence

Chief among the challenges location intelligence proposes to solve for retail, grocery stores, and other brick-and-mortar establishments is unlocking the power of geospatial data, enabling you to apply geographic context to business data. It allows for deep study and analysis of collected data; the formation of descriptive, diagnostic, predictive, and prescriptive models; and optimal visual communication.

Location intelligence is not just another information system for geographers or data analysts, but a powerful set of capabilities that can augment other business intelligence solutions.

Like a chameleon, location intelligence fits in different places. It can form part of the BI infrastructure by providing location data, take the shape of a BI-like application or provide tools to embed in other BI solutions.

Is your organization using these four types of analytics with its location data? Do decision-makers have the necessary resources to perform their own analysis or are they dependent on IT staff or external consultants? Read our Really Good Guide to Location Intelligence Implementation and download the templates to evaluate whether your organization is extracting value from its location data and start using location intelligence today!

Happy Location Data Mapping!

How Location Data is Helping Solve Water Insecurity


Nonprofits and humanitarian relief agencies are using location data to tackle some of the most challenging problems in the world today: from housing discrimination, to climate change, to disease prevention.

Water insecurity is one of the biggest challenges being addressed today by nonprofits, journalists, and technology companies, both at home and abroad.

Let’s take a look at how location intelligence helps transform location data into possible solutions for water insecurity problems.

Enriching water data through crowdsourcing and other external sources

How can nonprofits find data-driven solutions when location data on water resources is not readily accessible? The Regional Food Security Analysis Network (RFSAN) and cMapIT confronted this problem while trying to improve water access in Syria and Nigeria, respectively. Let’s see how both organizations employed Location Intelligence to surmount this obstacle.

The Regional Food Security Analysis Network (RFSAN)

The Regional Food Security Analysis Network (RFSAN) is a nonprofit based in Amman, Jordan, using open data provided by several humanitarian groups including iMMAP and USAID Food for Peace. RFSAN leverages these various data sources while locating solutions to:

  • Eradicate hunger, food insecurity, and malnutrition
  • Eliminate poverty to ensure economic and social progress
  • Encourage sustainable management and utilization of natural resources for future generations

As Syria’s civil war enters its sixth year, however, it is nearly impossible to safely assess current conditions on the ground.

For its Water Resources map, RFSAN’s team of data analysts imported available datasets on both Syrian infrastructure and water resources gained from satellite imagery. In building this interactive map identifying the amount and location of current water resources, RFSAN is helping raise situational awareness of the country’s ongoing humanitarian crisis while providing a much needed data resource for future relief work.

WaterDataNG

In response to a World Bank risk assessment on Nigerian water insecurity, cMapIT, a Grants for Good program recipient, attempted to provide transparency on water supply and consumption levels in Nigeria. However, neither a technological infrastructure nor reliable data on water resources existed.

cMapIT enlisted help from both Nigerian citizens and water point operators to collect water data that would allow the organization to build WaterDataNG. They created crowdsourced data visualizations showing water supply and consumption across Nigeria, which allowed cMapIT to lay the foundation for an open data portal for Nigeria.

Visualizing water contamination data

Access to clean drinking water is not only a problem for countries in Africa and the Middle East.

Lead contamination is a serious problem across the United States. While Flint, Michigan’s lead contamination crisis made headlines in 2016, data journalists continue to identify higher than reported lead contamination in drinking water across the United States.

Journalists have built data visualizations showing school districts whose antiquated plumbing systems have contributed to increased lead levels in drinking water in cities like:

  • Houston, Texas
  • San Diego, California
  • New York, New York

In each visualization, journalists employed interactive features, like widgets, to encourage higher levels of engagement from local residents. In response to public outrage, the state of California began to offer free testing for lead in school drinking water, demonstrating the power of location data on driving change.

Interacting with near real-time data

Global Fishing Watch, an interactive map monitoring fishing vessels around the world, has taken data visualizations to the next level. This transparency tool is the result of collaboration among several tech companies, including Google, SkyTruth, Oceana, and Vizzuality, to reduce ocean pollution caused from illegal, unreported, and unregulated fishing.

In its demand for greater data transparency, public and private sector accountability, and citizen engagement, Global Fishing Watch provides anyone with an internet connection near real-time data on approximately 60,000 vessels on the waters at any given time.

By making fishing monitoring accessible in near real-time, Global Fishing Watch has motivated world leaders, like Indonesia’s Minister of Fisheries and Marine Affairs Susi Pudjiastuti, to increase transparency on their countries’ fishing industries with more open data.

Conclusion

“Good leaders know that using and interpreting data is not only a search for insights,” write Frank V. Cespedes and Amir Peleg in their recent Harvard Business Review article, it is “also about enlisting the hearts and minds of the people who must act on those insights.”

We couldn’t agree more.

If you’re looking for more information on ways to use location data for good, then don’t miss our upcoming webinar!

Learn more tips on working with location data for good at our upcoming webinar on June 22, 2017 at 1PM EST/ 7PM CET

Register Today!

4 Powerful Historical Maps Every Data Analyst Should Know


While early cartographers didn’t use the term “location intelligence,” they were keenly aware that their maps had an intended purpose.

Sometimes, that purpose was as straightforward as conveying the dimensions of an unfamiliar city. Other times, it was as controversial as exposing racial violence.

Whatever the case, these four historical maps reveal that innovations in a map’s perspective, data, and aesthetics all contributed to the complex five-century evolution of location intelligence.

Leonardo da Vinci, “Town plan of Imola,” 1502

Da Vinci Town Plan of Imola

In 1499, Cesare Borgia, a general from Rome, conquered the city of Imola in northern Italy. Now responsible for defending the unknown territory, Borgia commissioned Leonardo da Vinci to map it.

What da Vinci produced looks familiar because it uses the same bird’s-eye perspective as Google Maps (called ichnography), but this technique was neither obvious nor easy.

Before da Vinci, cartographers primarily used an “oblique perspective” that showed a city’s various elevations, but da Vinci rightly surmised that a flat point of view would better align with Borgia’s desires.

However, without aerial photos, da Vinci had to rely on meticulous measurements of every building, road, and plot of land. Though he slightly distorts a few areas for aesthetic reasons, his map is a fairly precise rendering of Imola and the oldest surviving record of ichnography.

Petrus Plancius, “The Molucca Islands,” 1594

Petrus Plancius The Molucca Islands

In the late 16th century, Portugal was a global maritime power with trade outposts on every continent except Australia, and such a commanding fleet required equally sophisticated maps.

Twenty-five years before Plancius published “The Molucca Islands,” Gerardus Mercator had solved the problem of representing Earth’s spherical shape on a flat surface. But the technique required calculations that were too complex for most cartographers and wasn’t widely used.

Plancius’ map reveals that he understood not only the mathematics involved, but also the psychology of how to sell a new perspective.

By using rich colors, filling empty spaces with fantastical drawings, and reminding sailors of their commercial aspirations with illustrations of exotic spices, Plancius ensured that, of the many maps available at the time, his became the standard.

Matthew Fontaine Maury, “Whale Chart,” 1851

Maury Whale Chart

Before the invention of kerosene, the preferred way to produce light was whale oil. A single whaling expedition could net millions of dollars in profit, and by mid-century, larger ships were traveling greater distances to take advantage.

In terms of location intelligence, Maury’s map is important for two reasons. First, it’s an early example of crowdsourcing. Maury designed special logbooks to track weather, winds, currents, and water temperatures and distributed them to whale hunters. In exchange for their recordkeeping, they received the whale chart pictured above, which one sailor called “a precious jewel. . . sought for by all interested in whaling.”

Second, it introduced a new kind of location intelligence. Instead of conveying information about fixed points, it offered something much more complex and, therefore, valuable: a prediction of where mariners were most likely to succeed.

William Bunge, “Where Commuters Run Over Black Children on the Pointes-Downtown Track,” 1971

Where Commuters Run Over Black Children

Bunge’s map is a provocative and tragic culmination of the previous three. It uses a bird’s-eye perspective to give a holistic view of the chosen area, it reveals navigational routes and patterns (those of commuters in Detroit), and it relies on data collected from various sources, including local residents, newspapers, and police reports.

What Bunge’s map adds, though, is an implicit argument that transcends the data it depicts.

As a geography scholar commented, “Any Detroiter would have known that. . . this is a map of where white people, as they rush to and from work, run over black children. . . It is a map of racist infanticide, a racial child-murder map.” In that regard, Bunge’s work doesn’t just depict the world. It helps to change it.

As the technology behind cartography evolves, so, too, does location intelligence. To keep ahead of the latest trends (and to learn more about cool maps like da Vinci’s ichnography), subscribe to our CARTO 5 newsletter.

3 Retailers Proving Brick and Mortar Isn't Dead


You’ve probably seen the headline a dozen times: “[big box store] closes [large number] of Stores Nationwide.”

The narrative of the retail crisis is common in today’s national media. Retail stores are dead and ecommerce is the way of the future, right?

Not exactly.

A growing and elite group of e-commerce companies has begun opening brick and mortar stores to extend their online brands, and this new generation of hybrid e-brick-and-mortar companies has distinct advantages over their legacy competition—including a wealth of location data about their customers.

Here’s how e-commerce companies are winning at the brick and mortar game:

  • Data-driven store openings. These retailers have precise information on their customers’ locations and combine that knowledge with other location data to determine where to open new retail stores—as well as spots to avoid and what stores might best be consolidated.
  • Geographically targeted products and marketing. Beyond knowing where their most loyal customers are, they also know what products perform best in specific geographic regions—and can easily target the appropriate regions and demographics with marketing campaigns customized to each of them.
  • Branding on a hyper-local level. Their physical storefronts can take their online brand to the next level with a personal boutique feel that reinforces their image with the target demographics they’ve situated themselves within.

So far, this formula is proving successful. Many of the retailers setting this trend are on the verge of becoming household names; others have long reigned supreme in the digital space and are using brick and mortar locations to expand their reach even further. And all of these retailers are leveraging brick and mortar stores as a means of gaining even more data about their customers.

Three examples of successful hybrid e-retailers include:

Amazon

Amazon Go

Not only has this purveyor of “everything” set up brick and mortar storefronts based on location data about consumer demographics, but they’ve also recently been awarded a patent on new technology to further corner the market.

When a shopper uses Amazon’s in-store WiFi, this new technology provides the retailer with location information about which department of the store the shopper is browsing, allowing the retailer to market directly to them in real-time. The patented tech also lets Amazon block customers—again, when searching through the location’s WiFi—from comparison shopping while in-store.

With their recently announced acquisition of Whole Foods, Amazon is looking to extend their reach even further.

Warby Parker

Warby Parker

After building a successful Netflix-of-eyewear online business model, Warby Parker now boasts more than 40 retail locations and a net worth of over $1 billion—and they’ve recently announced plans to open 25 retail locations in 2017.

The switch to brick and mortar is rooted in an early exception to their online model, when co-founder Neil Blumenthal allowed customers to try on eyewear in his apartment. Citing a positive experience for all, Blumenthal began to consider introducing boutique locations to Warby Parker’s strategy—and he now speculates that they may someday have as many as 1,000 storefronts.

Bonobos

Bonobos

This menswear retailer set themselves apart by focusing entirely on customer experience.

Bonobos started by offering custom-fit pants online; a pair of dressing rooms in their headquarters then spawned the idea of branded boutiques called Guideshops. Visiting a Guideshop, customers receive a custom fitting from an experienced Bonobos “guide,” the ability to order clothing on-site, and even a free beer.

The rollout of this idea is made more strategic by nearly a decade of customer data and business analytics—and, of course, location intelligence.

Walmart took notice of Bonobos’ growth, announcing their $310 million acquisition last week on the same day as Amazon and Whole Foods.

Hybrid retail is the next big thing.

Retailers are using location intelligence to find competitors’ locations, determine demographic matches between their customers and local populations, uncover traffic patterns, and more—proving that hybrid retail is the next big thing.

And while some e-retailers might balk at adding the overhead associated with physical storefronts to an already robust online presence, consider the fact that more than 90 percent of all retail sales still occur in person.

Brick and mortar stores still hold certain advantages over the online customer experience—and most of the thousands of retail locations that have closed are artifacts of a landscape built before the proliferation of data.

The current trend of building the physical on top of the digital is a carefully executed strategy designed to win, based on location intelligence that helps retailers reduce costs, uncover market opportunities, and take their already successful brands to the next level.

The Reason Amazon Bought Whole Foods That No One Is Talking About


If Amazon’s planned $14 billion acquisition of Whole Foods goes through, it could radically change the grocery industry. Many news outlets have speculated why Amazon might be interested in Whole Foods, from solving grocery home delivery to getting a part of the $675 billion per year grocery business in the US.

To us, the reason is obvious: location, location, and location.

Amazon Gets Whole Foods’ Location Planning Process

To determine where to open new locations, Whole Foods employs a sophisticated calculus that is as valuable as — if not more so than — the chain’s physical premises.

Colloquially called “Whole Paycheck,” Whole Foods has to target an affluent subset of customers for its offering of natural, organic, and artisanal goods, which can be almost twice as expensive as similar products.

According to Business Insider, their typical customer is a female between 25 and 39 who drives a Mercedes Benz, enjoys cooking, and — most importantly — has more than $1,000 in monthly discretionary spending.

What this means for Amazon is that Whole Foods has already done the work of finding where affluent customers are right now — and also where they may soon migrate to. For example, according to the Washingtonian, about 24,000 people work within a ten-minute walk of a store in Washington, D.C. scheduled to open by 2019. By then, the newspaper predicts that 3,700 more residents will be living in the area, “including several hundred in apartments directly above Whole Foods.” The writer concludes, “Whole Foods signals the same thing to other businesses that it does to Washington house-hunters: This neighborhood is on its way. Better get in now.”

Amazon Gets Whole Foods’ Customer Location Data

Along with inheriting Whole Foods’ demographic legwork, Amazon will also acquire their customers’ consumption patterns. Unlike most other products that Amazon sells, food has a short shelf life. It has to be transported, stored, and sold quickly, which makes it especially helpful to have data to help forecast what consumers are likely to buy.

In addition, food preferences in the U.S. are largely regional (the South, for example, consumes four times as many processed carrots as the West). With this new set of data, Amazon will now be able to analyze trends on the hyper-local level, calibrating supply and demand on a store-by-store basis and further eliminating waste.

Amazon Gets Optimally-Located Distribution Centers

Even if Amazon doesn’t sell a single organic pepper or free range chicken breast after the acquisition, their new brick-and-mortars solve one of the most significant tactical problems on their agenda: proximity to customers. The company has already experimented with unmanned aerial vehicle deliveries, pop-ups, and parachuting shipping labels.

In December of last year, the company filed a patent for a massive floating blimp (called an Air Fulfillment Center) that will store potential purchases that are then delivered to customers using tiny drones.

These massive aerial warehouses underscore the company’s desire to provide cheap, ultra-fast deliveries. In that regard, the Whole Foods locations, already situated in dense, high-income areas, have logistical — as well as retail — value. As Dennis Berman, the financial editor for The Wall Street Journal, tweeted, “Amazon did not just buy Whole Foods grocery stores. It bought 431 upper-income, prime-location distribution nodes for everything it does.”

It was all about location.

Amazon rightly predicts that the same customer who’s willing to pay $4 for organic raw kombucha is also likely to buy other products from the internet giant, whether it’s a titanium coated knife set or a Prime membership.

However, Amazon’s acquisition will bear fruit the moment Whole Foods uploads its location data onto Amazon’s servers, revealing that the future of retail is as much about what’s in the cloud as what’s in the grocery basket.


A Map of Where People Went After the NYC Pride Parade


Each year, New York City celebrates Pride Month with a large procession down 5th Avenue, passing by the Stonewall Inn, the site of a police raid in 1969 that launched the gay liberation movement.

The event, which marks the important political, social, and historic impact of the gay liberation movement, also has a significant physical and spatial impact on the city. We wanted to find the post-parade hotspots to assess the effect of the parade on local businesses and transit. (And, also, it’s just kind of cool data.)

We took yellow NYC taxi pick-up and drop-off data from Sunday, June 26th, 2016 (NYC only releases taxi data monthly, so we’ll update this once we have 2017 data!), and compared it with data from the previous Sunday (June 19th) to get a somewhat like-for-like comparison.

We plotted taxi drop-off locations where:

  • the origin of the taxi ride was the pride parade area
  • the trip occurred between 4pm and 8pm (when the parade is typically winding down).

We then clustered these points using a commonly used spatial clustering algorithm called DBSCAN, which looks at the density of points within a specified region and groups points if they are sufficiently densely packed.

Because we are interested in finding the specific locations where people go, we set our search radius for the algorithm to be fairly small (10 meters or around 32 feet).
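
For readers who want to reproduce the step, here is a minimal scikit-learn version. The file name and min_samples value are assumptions, and the points are projected to a feet-based CRS so that eps matches the radius above:

    import geopandas as gpd
    from sklearn.cluster import DBSCAN

    # Hypothetical taxi drop-off points, reprojected to NY state plane (US feet).
    dropoffs = gpd.read_file("dropoffs.geojson").to_crs(epsg=2263)
    coords = [(geom.x, geom.y) for geom in dropoffs.geometry]

    # eps is the search radius: ~32 feet matches the 10 meters used above.
    labels = DBSCAN(eps=32, min_samples=10).fit_predict(coords)
    dropoffs["cluster"] = labels  # -1 marks noise points outside any cluster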

From here, we looked at where these clusters are and what is in the area. Our seasoned team of expert LGBTQ socialites handpicked some of the larger (and thus, more likely) events, businesses, and public venues where we see clusters and used CARTO’s walkshed function to calculate 1-min and 2-min walksheds around these areas to see if the clusters fall in these regions.

We can see:

  • People are primarily going to commercial regions (rather than residential ones).
  • There is a pretty large concentration around 14th Street, at the very west of the city, where Pridefest, a day-long street fair that is also part of the Pride festivities, was held; the area is also a prominent entrance to the High Line and home to places to go out, such as Le Bain at the Standard.
  • There are also large concentrations of drop-offs at several prominent gay bars including the Eagle, Phoenix, and Barracuda.
  • Concentrations of drop-offs near different hotels across the city, such as the W Hotel and the Continental. This makes sense, as many taxi-riders are likely to be visitors in the city.
  • There are also drop-offs near major transit hubs at Penn Station, Grand Central Terminal, the Best Bus Pick-up and the airport.
  • Lastly, the large cluster around the Jacob Javits Convention Center is from a specialty foods convention that was happening at the center that weekend.

Where did people go in 2017? Check back in a few weeks to see what the data tells us. Happy Pride!

Cover Photo by Levi Saunders

80 Beautiful Data Visualizations Using Location Data and Maps


As the importance of location data continues to grow so do the ways you can visualize this information. We’ve scoured the web in search of data visualizations showing the value of location data in its many varieties, and have compiled this mega list to bring you the very best examples. The 80 entries below surprised us, taught us, inspired us, and drastically changed the way we understand location data.

We grouped these 80 data visualizations into six thematic categories, and then listed each entry (click on the name of a visualization to open it).

From data visualizations on global breathing patterns, to fan reactions to the latest episode of Game of Thrones, to international diplomacy and humanitarian crises, these 80 data visualizations are only a small glimpse into the different ways location data is being used around the world.

Enjoy!

Conflict Zones

Conflict Zones

Reprojected Destruction
Hans Hack’s Reprojected Destruction uses satellite imagery showing city-wide damage to buildings and infrastructure in Aleppo, Syria, that he then projected onto figure-ground maps of Berlin and London. “The overall aim of the exercise,” as stated on the website, “is to help viewers imagine the extent of the destruction that might have been visited upon the UK and German capitals had these cities stood at the centre of Syria’s current conflict.” In using location data to relocate the destruction wrought by the Syrian civil war, Hack reminds us that data visualizations are not only beautiful, but powerful communication tools.

Conflict Urbanism: Colombia
The causes behind the unsustainable increase in global urban migration are many, but this data visualization shows how armed conflict has caused mass migration in Colombia from 1985 to 2015. Built with a recently released open dataset, Conflict Urbanism uses location data from displaced populations to chart routes from Colombia’s countryside to its urban centers. These routes can be enriched with municipal location data and population demographics provided by NASA satellite imagery and the Colombian National Department of Statistics, respectively.

World Migration Map
With open data on worldwide net migration between 2010 and 2015 provided by the United Nations Population Division, Max Galka set out to visualize this large volume of data in one map. The result? An incredible resource charting migration flow patterns from origin-to-destination spanning five years. In visualizing this location data, the “World Migration Map” provides a transparency tool that can help fact-check politicians fanning fears with heated rhetoric about walls and whatnot. You can read Galka’s full analysis here.

Missile Threat: CSIS Missile Defense Project
As relations among NATO member states cool, Russia continues to flex its militaristic might and increase its geopolitical presence. In response, the Center for Strategic and International Studies’ (CSIS) Missile Defense Project built Missile Threat, an interactive data visualization providing a broad (but admittedly not exhaustive) overview of the A2AD situation in Europe. Location data was used to map military bases throughout the area, and then an area of influence analysis was applied to approximate the radius of areas at risk from different missile launch capacities.

Spies in the Sky
Peter Aldhous’ “Spies in the Sky” data visualization reveals flight track patterns of the U.S. government’s airborne surveillance using aircraft location data provided by flightradar24. Individual aircraft flights are represented by animated dots while dense circles indicate regularly monitored areas. The data visualization’s color palette of red, white, and blue reinforces Aldhous’s point, and perhaps explains why National Geographic ranked “Spies in the Sky” among its best maps of 2016.

Syria after four years of Mayhem
Before and after pictures can seem gimmicky, but Sergio Pecanha, Jeremy White, and K. K. Rebecca Lai reminded us of this genre’s effectiveness in “Syria After Four Years of Mayhem” (2015). Leveraging satellite imagery, location data from IHS Energy Data Information Navigator, and data from several humanitarian relief agencies, the authors show the devastation of the Syrian civil war by visualizing how in two years “the country is 83 percent darker at night than before the war.”

The Executive Abroad
Did you know that until Theodore Roosevelt, no sitting United States President traveled outside the country? This is just one of the many historical insights made accessible thanks to the University of Richmond’s Digital Scholarship Lab, and specifically its data visualization of every executive trip from Roosevelt to Obama. We especially love the customized basemap whose interactive compass makes use of location data’s temporal and spatial dimensions.

The Refugee Project
Hyperakt’s The Refugee Project reminds us that art is a medium for political protest. This data visualization is both a resource that uses United Nations’ refugee data to enable comparative studies on refugee migration and a work of art that New York’s Museum of Modern Art selected for its “Design and Violence” exhibit.

The Shape of Slavery
Michelle Alexander’s The New Jim Crow, a major source for Ava DuVernay’s 13TH, identified Jim Crow legislation as the origins of the “school to prison pipeline.” In The Shape of Slavery, Bill Rankin and Matt Daniels distill the location component of historical data related to slavery and incarceration rates to provide visual proof that America, far from being a “post-racial” society, “is only recently a post-slavery one.”

United States Sanctions Tracker
Enigma’s Sanctions Tracker, which monitors U.S. sanctions from 1994 to the present and updates each day, is one of the first resources to map this data despite the inherent spatial aspect of sanctions. The tracker uses time-series analysis to show the rise in sanctions across four different presidencies, and its interactive design allows viewers to click on animated dots to learn more about the specific type of offense being sanctioned.

White Collar Crime Risk Zones
The New Inquiry is bucking a data visualization trend that maps open data from police reports that tend to focus on “street crime” prevention. Instead, this data visualization uses machine learning to locate white collar crime risk zones (and provide some uncanny facial profiles!). Brian Clifton, Sam Lavigne, and Francis Tseng explain their methodology here, and we’re excited to enter this brave new world of data visualizations!

Connectivity

Connectivity

A Day in the Life of an American
Nathan Yau over at Flowing Data takes a creative spin on location data in his simulation of movement patterns among Americans. With data from 1,000 people’s daily activities as reported in the United States Department of Labor’s American Time Use Survey, Yau’s simulation represents each person as an animated dot, whose color changes en route from one activity to the next over the course of twenty-four hours, to locate behavioral patterns. It’s meta. It’s beautiful. It’s a must see.

Connectivity Atlas
John Donne told us that “no man is an island entire of itself,” but what, exactly, connects everyone to “a piece of the continent”? Luckily, Connectivity Atlas has an answer with its data visualizations showing how “[i]nfrastructure connects and defines us.” Built entirely with open data, this data visualization maps all the connective threads powering your day to day activities including telecommunication, transportation, and energy.

Global Diplomacy Index
The Lowy Institute for International Policy, a nonpartisan Australian think tank, created the Global Diplomacy Index to visualize diplomatic networks. At the same time, this map exposes gaps in network coverage as well as high concentrations of diplomatic resources around the world. The map’s interactive design allows viewers to see the global reach of diplomacy at both the city and country level as serene blue lines are drawn across a basemap, reminding us all to keep a cool head.

Live Cyber Attacks
If 2016 taught us anything, it was the threat posed by cyber attacks. Norse provides threat attack intelligence, and its mesmerizing Cyber Attacks data visualization uses location data to show in real-time the origin and destination of security breaches.

Chicago’s Million Dollar Blocks
Millions of dollars are being invested in low-income neighborhoods across Chicago, but not to (re)invest in these neighborhoods. Instead, as Chicago’s Million Dollar Blocks project reveals, a war on neighborhoods is being waged as more money is spent policing low-income neighborhoods across the city. This data visualization is more than an exposé of wasteful spending. In fact, Chicago is reimagined from the perspective of neighborhoods whose low-income status is perpetuated by large infusions of tax dollars that fund disproportionate policing, which leads to higher incarceration rates despite declining crime levels.

National Broadband Map
What does your broadband connection say about you? Well, as the National Broadband Map demonstrates, a lot! It may not have as many interactive features as the Connectivity Atlas or Global Diplomacy Index, but it does use location data to identify gaps in broadband coverage across the country. In light of recent FCC rulings, this data visualization is an important reminder that the digital divide persists. Check it out today!

Every Active Satellite Orbiting Earth
David Yanofsky and Tim Fernholz provided some much needed “edutainment” in Every Active Satellite Orbiting Earth. This data visualization gives viewers a sense of the location and orbit perimeter of the 1,300 active satellites. Satellites, represented as individual bubbles whose color indicates their use, are compressed into a column indicating altitude above the earth. Make sure to turn on the orbit feature to get a sense of each satellite’s orbit!

Twenty Years of India at Night
A picture may be worth a thousand words, but how many data points can a picture provide? Twenty Years of India at Night may have the answer! Using pictures from the Defense Meteorological Satellite Program that were taken each night between 1993 and 2013, researchers extracted location data on light output from 600,000 villages and then mapped these points on the India Lights map. The time-series analysis feature shows both the volume of data collected and reveals the large rural areas still lacking access to electricity across India.

What Powers the World?
GoCompare’s What Powers the World? is an interactive visualization built with location data provided by the International Energy Agency displaying how reliant each nation is upon fossil fuel, nuclear, and renewable energy. What we love about this data visualization is its use of a dark matter basemap, a subtle use of color theory illuminating what really does keep the lights on around the world.

World’s Biggest Data Breaches
David McCandless, founder of information is beautiful, and Tom Evans created World’s Biggest Data Breaches, an interactive timeline of data leaks from 2005 to the present including interactive bubbles for each entry. Oh, by the way, each bubble represents breaches of at least 30,000 records and provides detailed information on leak type, the industry in which the leak occurred, and links to detailed reports covering the breach. Locating data on data leaks has never been easier (…or scarier!).

Environmental

Environmental

Breathing Earth
John Nelson’s Breathing Earth used satellite images from NASA’s Visible Earth catalog to create an animated data visualization showing the earth’s pulse through one year’s seasonal transformation. The map was a huge hit, and has spawned many noteworthy followups including Nadieh Bremer’s A Breathing Earth (2016) and an entry a little further down our list!

Cloudy Earth
We’ve looked at clouds from both sides, but NASA Earth Observatory has us beat with its visualization of cloud data between July 2002 and August 2015. Cloudy Earth attempts to visualize data on clouds, one of the least understood components of our climate, in order to study its role in global climate change. NASA’s Aqua satellite, and its MODIS sensor, provided imagery and location data for this visualization whose cool-blue color palette and time-lapse animation enables viewers to easily identify patches of high cloud density around the world.

Ecoregions 2017
RESOLVE’s Ecoregions 2017 data visualization displays the earth’s 846 ecoregions in a stunning example of biogeography. The map contains a host of interactive features that use location data not only to identify areas of biological diversity, but also to track global progress on Nature Needs Half’s commitment “to protect half of all the land on Earth as a living terrestrial biosphere.”

Eruptions, Earthquakes, and Emissions (E3)
The Smithsonian’s E3 data visualization is a time-lapse animation of volcanic eruptions, earthquakes, and carbon emissions around the world since 1960. Drawing on data from its Global Volcanism Program, earthquake data from the United States Geological Survey, and data from the Deep Carbon Observatory, this map uses location data to better understand our environment.

Global Historical Emissions Map
Similar to the Smithsonian’s E3 data visualization, Aurélien Saussay’s Global Historical Emissions Map surveys environmental changes over time. However, this data visualization displays location data on fossil-fuel burning and gas flaring as well as cement production between 1750 and 2010. You can read more about Saussay’s methodological approach to mapping the industrial revolution’s historical impact as well as his decision to use a gridded dataset here.

GlobalView: Climate Change in Perspective
Data visualizations are a great way to tell a story, and that’s exactly what the editors at Bloomberg View do in GlobalView: Climate Change in Perspective. This story map works with location data related to climate change to present a clear, concise message about the urgency of this global crisis. Following the recent announcement about the Trump administration’s decision to leave the Paris Agreement, we need more of these types of data-driven stories.

The Lead Map
We’ve mentioned the pressing issue of water insecurity, which is why we’re thrilled to include SimpleWater’s latest data visualization. The Lead Map uses location data related to the ages of homes in a given county as well as the average corrosiveness of that state’s groundwater to predict the level of lead exposure in a neighborhood’s water. We love this data visualization not only for drawing attention to the pressing issue of access to clean drinking water, but also for its innovative use of different types of location data used for risk assessment. Check out your neighborhood’s lead exposure today!

London Atmospheric Emissions Inventory
Similar to the previous two entries, Parallel’s data visualization uses location data to map emissions across London, England. What’s different, however, are the 3D interactive features showing the levels of concentrated atmospheric emissions across the city.

Migrations in Motion
We’ve mentioned that climate change is contributing to increased urban migration, but how are animals reacting to these changes? This is the question that Dan Majka, a member of The Nature Conservancy’s North America Region science team, set out to answer with Migrations in Motion, a data visualization charting the average migration routes for mammals, birds, and amphibians. Inspired by our next entry, this map distills location data on the migratory movement of nearly 3,000 different animal species into a macro-level view. Check it out today!

The Earth Wind Map
In 2013, Cameron Beccario created The Earth Wind Map, a data visualization showing global weather conditions as forecasted by supercomputers with updates every three hours. The project was originally inspired by Hint.fm’s Wind Map, a data visualization of wind patterns that automatically updates based upon available weather data. The Earth Wind Map’s use of location data is nothing short of revolutionary, which you’ll discover by interacting with the data visualization. See what the same location data looks like using a stereographic projection! In the words of Florence and the Machine: “So big, so blue, so beautiful!”

Five Years of Drought
The widely-celebrated Five Years of Drought, John Nelson’s second appearance on our list, visualizes 285 weeks of drought data as reported by the United States Drought Monitor in a single view. Despite its static design, the result, as Nelson writes, was “a map that accidentally characterizes the movingness of droughts over five years by using opacity to represent motion.” A great example of the role perceptual color theory plays in spatial analysis and data visualizations, both static and interactive!

The True Size Of…
James Talmage and Damon Maneice created this app to dispel geographical misconceptions resulting from map projections, like Mercator, that distort the size and shape of land masses, and most notably the size of the African continent. Similar to Reprojected Destruction, we love this data visualization for its re-visualizations! Check it out!

What Is Missing?
Maya Lin’s What Is Missing? is a wake-up call for a world on the brink of the sixth mass extinction. Unlike other entries on this list, which show gaps in network coverage only incidentally, this entry is entirely premised on using location data to visualize degradation and absences.

Treepedia
MIT’s Senseable City Lab’s Treepedia maps location data related to tree canopies for cities around the world including Paris, Frankfurt, and Cape Town. Instead of mapping each individual tree in each city, these data visualizations are built with an analysis method that uses location data to show the “amount of green perceived while walking down the street.”

Meteor Showers
How can data visualizations represent abstract concepts without distorting or reducing the data’s spatial design? That was the problem facing Ian Webster while working with meteor shower data collected by astronomer Peter Jenniskens in Meteor Showers. The solution? Create a 3D visualization providing viewers a 360 degree view of meteor showers moving through the solar system.

World Population Density
The accelerated rate of global urban migration is cause for alarm for elected officials tasked with providing city-wide services. But where, exactly, are these population increases happening? To answer this question, Duncan Smith over at CityGeographics built this World Population Density map with location data from the European Commission JRC and the Center for International Earth Science Information Network at Columbia University. What’s stunning about this data visualization is its ability to dispel a reductive urban-rural understanding of geography, as dense pockets of human settlements are found beyond traditionally recognized urban centers around the world.

Sites, Sounds, and Smells of City Living

3D Model of New York City
Cesium’s data visualization brings together features discussed in other entries–like mapping city-wide shadows and historical development–in one 3D model of New York City. This data visualization uses 3D Tiles to represent location data in a responsive manner, leading the way toward fulfilling the Digital Earth vision.

50 Years of Concerts of The Rolling Stones
This data visualization commemorating the 50th anniversary of The Rolling Stones maps moonlight miles traveled while touring for half a century. The location data for this data visualization was extracted from Wikipedia, which we know can sometimes be like playing with fire, but luckily fans know their facts!

A New View of the Housing Boom and Bust
The Urban Institute’s Bing Bai and Taz George originally published A New View of the Housing Boom and Bust in September 2013, but this interactive data visualization continues to be updated each year with open data made possible by the Home Mortgage Disclosure Act. What we love about this data visualization is the annual time-lapse animation set against a static line graph, giving viewers both a macro and a micro view of the housing market’s travails.

Block by Block, Brooklyn’s Past and Present
Thomas Rhiel built this data visualization in 2013 with historical location data provided by New York City’s Department of City Planning to chart the uneven evolution of Brooklyn’s look and feel. More specifically, Rhiel plotted and shaded over 320,000 Brooklyn buildings according to construction year, revealing which areas of the borough have been heavily redeveloped and which neighborhoods seem not to have been modified at all.

Breathing City: Manhattan’s at Work and Home Population by Hour
Inspired by John Nelson’s Breathing Earth, discussed above, Joey Cherdarchuk’s Breathing City visualizes Manhattan’s respiratory motion over a single day. This dot density map charts Manhattan’s population both at work (red dots) and at home (blue dots), which, as Cherdarchuk explains, was harder than expected because obtaining the appropriate location data was difficult.

Count Love
Sometimes, less is more. We love Nathan Perkins and Tommy Leung’s understated visualization of location data related to resistance protests, titled Count Love. When asked about their inspiration, Tommy and Nathan said that they “created Count Love in the hopes of historically documenting protests related to civil rights, immigration, racial injustice, and other important societal issues across the United States.” In addition to Count Love’s interactive data map, check out the use of proportional bubbles scaled to the size of each demonstration on the statistics page!

Every Shot Kobe Bryant Ever Took
To commemorate former Los Angeles Laker Kobe Bryant’s final game, the Los Angeles Times created a data visualization featuring a custom basemap displaying a basketball court on top of which are 30,699 dots representing the location from which Bryant took every shot of his career. An innovative approach to indoor mapping to say the least!

Fans on the Move
Are you willing to travel internationally to attend your favorite band’s concert? Your favorite sports team’s big game? Ticketbis, an international subsidiary of StubHub, examined 36 months of location data on attendees purchasing international tickets through its service, and the results are interesting. Spoiler: the Super Bowl and 2012 Summer Olympics rank pretty high, but check out which countries of origin are home to some of the world’s most diehard groupies!

How Music Taste Evolved: The Billboard Top 100 from 1958-2016
Matt Daniels over at The Pudding visualized data for 22,000 songs ranked among Billboard’s Top 100 over nearly six decades. This time-lapse animation shows the top five songs each week while audio plays clips from each number one song. Yes, this visualization uses data to locate cultural trends at a certain moment in time, but what really caught our attention was the audio component, which scales the length of time a number one hit plays to the length of its time in the top spot!

Mapping the Shadows of New York City: Every Building, Every Block
Manhattanhenge is great, but for the rest of the year New Yorkers take access to sunlight very, very seriously. In “The Struggle for Light and Air in America’s Largest City” (2016), Quoctrung Bui and Jeremy White built a data visualization of New York City that maps building shadows. Using location data on Manhattan buildings, Bui and White used ray tracing to simulate the effect of sunlight on each building and its surrounding area. The results are stunning, as “dark” neighborhoods in the shadows of nearby skyscrapers are easily spotted. Location data can cast a long shadow, it turns out!
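
Bui and White used full ray tracing, but the core geometric idea can be sketched far more simply: project a building footprint away from the sun by a distance proportional to its height. Here is a toy version in Python with shapely; the footprint, height, and sun angles are invented for illustration, and a real model would use actual solar positions per building.

    import math
    from shapely.geometry import Polygon
    from shapely.affinity import translate
    from shapely.ops import unary_union

    def shadow(footprint, height_m, sun_altitude_deg, sun_azimuth_deg, steps=20):
        """Approximate a flat-roofed building's ground shadow by sweeping the
        footprint from its base out to the roof's projected position."""
        length = height_m / math.tan(math.radians(sun_altitude_deg))
        az = math.radians(sun_azimuth_deg)  # azimuth clockwise from north
        dx, dy = -length * math.sin(az), -length * math.cos(az)  # away from sun
        copies = [translate(footprint, dx * i / steps, dy * i / steps)
                  for i in range(steps + 1)]
        return unary_union(copies)

    # A hypothetical 100 m tower on a 20 m x 20 m footprint, late-afternoon sun.
    tower = Polygon([(0, 0), (20, 0), (20, 20), (0, 20)])
    print(shadow(tower, 100, sun_altitude_deg=25, sun_azimuth_deg=250).area)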

Musical Map of the World
Eliot Van Buskirk, Data Storyteller at Spotify, built this data visualization using location data extracted from customers’ streaming preferences. As such, Musical Map of the World curates “distinctive playlists” each week for cities around the world featuring that city’s top 100 streamed songs. Map viewers become map listeners with this data visualization as each dot can stream that city’s playlist. Check it out!

Netherlands Buildings
Inspired by Thomas Rhiel’s data visualization mentioned above, Bert Spaan and the Waag Society created this data visualization representing all 9,866,539 buildings in the Netherlands. The qualitative color scheme shades each building by construction year, and the use of a Dark Matter basemap adds a contrast that catches the eye. The Waag Society, in fact, has been selling reproductions of this beautiful visualization of location data!

Population.io
The World Data Lab’s Population.io may be both the most comprehensive and informative visualization of location data on our list. One of our favorite interactive features is the visualization of demographic data based upon a map viewer’s date of birth, a neat way to show how much the world’s population has grown within your own lifespan. Another interesting feature is the interactive map that estimates your remaining life expectancy based upon your current location, which can be compared with other countries around the world.

Smellmap: Amsterdam
Entries on this list so far have used location data to map a given city’s visual sites and audible sounds, but our next entry takes data visualizations to a whole new level: visualizing a city’s smell. Kate McLean, artist and designer working on urban smellscapes, created Smellmap: Amsterdam, a sensory map whose animated dots indicate over 50 smell types whose wafting radius is represented by concentric circles.

Spain in Figures
We’ve mentioned in the past how open data enhances transparency around a local government’s smart city projects, and Spain in Figures is a great example of what that means. This visualization of location data across Spain provides proof of changes across the country over the last four years. As an open source tool, moreover, this data visualization encourages local residents to contribute data on their municipality, a great way to hold elected officials accountable.

The Geographic Divide of Oscar Films
Inspired by Josh Katz’s cultural divide maps, Matt Daniels, Ilia Blinderman, and Russell Goldenberg over at The Pudding decided to see if cultural and geographical divides corresponded in relation to 2017 Oscar-nominated films. The maps are gorgeous, the methodology rigorous, and the widespread popularity of Arrival undeniable!

Underworlds
How can location data from your city’s wastewater system help public health officials better understand urban epidemiology in near real-time? This is the question behind Underworlds, the second entry on our list from MIT’s Senseable City Lab. We love this data visualization for providing a twenty-first century take on John Snow’s cholera map, and look forward to seeing whether this project can improve a city’s health one neighborhood at a time.

Ungentry
Ungentry, a Code for America Brigade project, wanted to know if Beantown would follow a pattern of gentrification similar to that of San Francisco, California and New York, New York. This data visualization uses a choropleth map to highlight changes in data for each Boston neighborhood between 1990 and 2010, establishing a baseline for gentrification and enabling further analysis to identify the factors contributing to this demographic change.

Why Measles May Just Be Getting Started
Keith Collins, Adam Pearce, and Drew Armstrong’s Why Measles May Just Be Getting Started is a great example of data journalism, and one of our favorite visualizations of location data without a geographical basemap! Instead of plotting the geographic coordinates of an outbreak of measles, this entry visualizes each state as a post-it note whose size proportionally corresponds to the number of reported outbreaks.

Social Media

Gay Happiness Index
PlanetRomeo, Europe’s leading gay social network, created the Gay Happiness Index using location data retrieved from their online dating app. Based on data from over 115,000 users, this data visualization provides a happiness score for each country that is then ranked to determine the best place for gay men to date. Find out how your country scored below, and scroll down to learn some interesting facts too!

How Every #GameOfThrones Episode Has Been Discussed on Twitter
For a twist on location data, check out Krist Wongsuphasawat’s interactive data visualization How Every #GameOfThrones Episode Has Been Discussed on Twitter. Using social media data from Twitter, this data visualization forgoes geographical location and instead locates thematic interest in each new episode of HBO’s Game of Thrones. More specifically, this data visualization depicts statistics on fan reactions shared on Twitter in the twenty-four hours following each episode’s premiere. Find out when, exactly, winter arrives using this website! #Longmayshereign

Inequaligram: Measuring Social Media Inequality
You’ve probably seen a lot of public Instagram images of midtown Manhattan shared by both tourists and residents, but what about Fort Tryon Park? The answer is likely “No,” and the reason for this disparity, finds the team behind Inequaligram: Measuring Social Media Inequality, relates to economic inequality across New York City. These dot density data visualizations were built with location data extracted from 7,442,454 public Instagram images shared by visitors to and residents of Manhattan, and you’ll note that the volume of images for both visitors and locals drastically trails off after 110th Street.

Sunrise around the World
Are you a morning person? Well, as this visualization of location data extracted from global tweets demonstrates, you’re not alone! This time-lapsed dot density map shows geotagged tweets containing “sunrise” in different languages from around the world on April 6, 2014. Make sure to zoom in to see just how many Twitter users tweeted about the rising sun!

The Food Capitals of Instagram
We know foodies love posting images of their meals on Instagram, but what location data can the images themselves provide? The Food Capitals of Instagram adds a twist to social media data visualizations in mapping not restaurant locations but rather the geographical origins from which the food served originates. The visual’s location data was extracted from more than 100,000 photos posted on Instagram between March 10 and March 15, 2015, and bubbles are sized in proportion to the volume of photos.

The Louvre on Instagram
This entry is “meta” to say the least. Tin Fisher created a data visualization featuring a basemap representing the floor plan of the Louvre, one of the world’s premier art museums, located in Paris, France, and mapped Instagram photos of the artworks on display there. Fisher downloaded geotagged photographs of Louvre artworks from 2014 using the Instagram API and mapped each image as a data point according to its actual position within the Louvre. It wasn’t too surprising to see the high volume of foot traffic around the Mona Lisa. It was surprising, however, to see how many people looked at art through a screen at the Louvre!

Twitter Tongues
The wealth of location data provided from social media platforms is staggering, and in this data visualization James Cheshire mapped 3.3 million geo-located tweets collected by Ed Manley featuring different languages found across the city of London during summer 2012. That London hosted the 2012 Summer Olympics accounts for the high density of dots of different languages found in and around Olympic Park. Check it out (and make sure to see the difference the basemap slider makes!)

Locals & Tourists
We were impressed with the previous entry’s visualization of 3.3 million data points, but a year later Eric Fischer mapped 3 billion tweets in Locals & Tourists. This data visualization breaks down the tourist-local divide by mapping social data provided by GNIP. Learn more about what went into processing this high volume of geospatial data here.

Wikipedia Recent Changes Map
Stephen LaPorte and Mahmoud Hashemi’s data visualization tracks global updates to Wikipedia made by unregistered users. Although this population accounts for only approximately 15 percent of total Wikipedia updates, it is pretty cool to see how LaPorte and Hashemi used IP addresses to extract the geographical location of unregistered users.
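
The geolocation step behind this map is a standard offline GeoIP lookup. Here is a sketch using MaxMind’s geoip2 Python library; the .mmdb path is a placeholder (the free GeoLite2 database must be downloaded separately), and the IP shown is just an example, not a real editor.

    import geoip2.database

    reader = geoip2.database.Reader("GeoLite2-City.mmdb")  # placeholder path
    response = reader.city("128.101.101.101")  # example IP address
    print(response.city.name,
          response.location.latitude,
          response.location.longitude)
    reader.close()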

Wikiverse
Have you ever gone down a Wikipedia rabbit hole? Well, imagine if that were a black hole and you’d begin to approximate Wikiverse, a self-proclaimed “galactic reimagining of the wikipedia universe.” What we love about this data visualization is its treatment of location data, wherein spatial “proximity” is rendered as semantic “similarity,” a move reminiscent of Tobler’s first law of geography. Check it out today!

Visualizing Global Blog Activity
If you work in marketing, then you’ve probably heard the phrase “content is king.” And you’ve probably been warned about the volume of quality content being produced around the internet too. Well, Twingly’s data visualization offers quantitative proof of at least the amount of blogging around the world in near real-time. Make sure to turn on the sidebar showing each post’s language to get a better sense of what all those expanding lines really mean!

Transportation

A Tale of Twenty-Two Million CitiBikes
In his almost Dickensian breakdown of location data generated by Citi Bike riders, Todd Schneider details the hidden story behind 22.2 million Citi Bike rides across New York City. We love the time-lapse animation tracking the route of each and every bike in use over the course of a day. Make sure to check out what rush hour looks like for bikers starting around 5:30pm for a whole new take on “it was the best of times, it was the worst of times.”

Average Commute Times
The grass may seem greener on the other side, but is the commute time shorter too? Thanks to AutoAccessoriesGarage’s interactive data map, you can now test the belief that other commuters have it easier. Built with open data on commuting from the United States Census Bureau, this data visualization allows viewers to check their average daily commute against zip code and state averages in a beautiful choropleth map. Check your commute score today!

Every A380 Route
The Airbus A380, the world’s largest passenger airliner, has fallen out of favor as airlines attempt to cut costs. Today, as David Yanofsky reports, only 13 out of 57 airlines even fly the A380! Read more of Yanofsky’s report here, and check out the rotating globe visualizing A380 flight routes using location data from PlaneStats and OpenFlights.

Glasgow in Motion
Glasgow is Scotland’s largest city, but with Glasgow in Motion you can glimpse the pulse of this thriving metropolis in real time. The Urban Big Data Centre has created more than a transportation app with real-time alerts as this location app even visualizes the city’s air quality using data from World Weather Online. Check it out today!

The Ship Map
Kiln’s The Ship Map, winner of a 2016 Information is Beautiful award, displays the 2012 movement of the global merchant fleet using location data provided by Julia Schaumeier and Tristan Smith of the UCL Energy Institute. By cross-referencing location data on each ship’s geographical coordinates and speed with other databases, this data visualization determines characteristics of each ship in order to calculate the hourly rate of carbon emissions. This is a great example of using accessible location data to estimate missing variables!
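
The estimation step works roughly like this: engine load scales with the cube of speed (the propeller law), load times installed power times specific fuel consumption gives fuel burned, and a fixed factor converts fuel to CO2. The sketch below is a back-of-the-envelope approximation under typical textbook constants, not the UCL team’s actual model.

    def hourly_co2_tonnes(speed_kn, design_speed_kn, engine_kw,
                          sfoc_g_per_kwh=190.0, co2_per_fuel_tonne=3.1):
        """Rough hourly CO2 for a single ship position report."""
        # Propeller law: required power scales with the cube of speed.
        load = min((speed_kn / design_speed_kn) ** 3, 1.0)
        fuel_tonnes = engine_kw * load * sfoc_g_per_kwh / 1e6  # grams -> tonnes
        return fuel_tonnes * co2_per_fuel_tonne  # ~3.1 t CO2 per tonne of fuel

    # A hypothetical container ship: 40 MW installed, design speed 24 kn,
    # currently sailing at 18 kn.
    print(round(hourly_co2_tonnes(18, 24, 40_000), 1), "t CO2/hour")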

Hubway
The second entry on our list from Nathan Perkins and Tommy Leung of Count Love, Hubway charts over five million Hubway trips taken in Boston, measuring station traffic to identify underutilized outliers. We loved this cyclist’s take on route optimization!

In Flight
Kiln created this data visualization for The Guardian using data provided by FlightStats. This data visualization tracks individual flights in near real-time as well as flight routes identifying areas with high traffic volume each and every day. Watch the video before exploring the map for yourself!

New Europe
Benedikt Groß, Philipp Schmitt, and Raphael Reimann over at Moovel Lab set out to determine whether all roads do, in fact, lead to Rome. Similar to other entries on our list, New Europe undertook route optimization, but on a whole new level, determining which of the nearly 500,000 roads leading to Rome was the best option. These data visualizations definitely weren’t built in a day, but we’re grateful for all the time and energy that went into mapping the routes from each of the 486,713 starting points! Learn more about the project here.

Night Lights Map
We’ve been told the night is dark and full of terrors, but luckily there’s a Night Lights Map for that. NASA’s Earth Observatory created this dazzling data visualization, which is “the clearest yet composite view of the patterns of human settlements across our planet,” using location data extracted from satellite imagery. In addition to beautiful images, this data visualization will help researchers investigate how cities around the world expand in the coming years in response to global urban migration.

Sensing Vehicle: The Car As An Ambient Sensing Platform
There’s been a constant buzz around driverless vehicles for some time, but did you know that our current fleet of cars already contains upward of 4,000 sensors? Sensing Vehicle, the third and final entry on our list from MIT’s Senseable City Lab, provides one of the first interactive car model maps locating the sensors that make our cars smarter each day.

The Megaregions of the U.S.
Garrett Dash Nelson and Alasdair Rae visualized their research on economic geography across the United States in The Megaregions of the U.S. This data visualization participates in growing efforts to map beyond static boundaries and instead see what the U.S. “might look like if we based our regions on the pattern which commuters weave every day between cities, suburbs, and rural areas.” With more than four million lines, this data visualization weaves together beautiful digital cartography and innovative spatial analysis. A must see!

Travel Time to the Closest Primary Airport in the U.S.
City-Data built Travel Time to the Closest Primary Airport in the U.S. This data visualization calculates travel times with the help of routing software, specifically the Open Source Routing Machine (OSRM), applied to both public and private data to calculate spatial distance and your optimized route to and from the airport. Check it out!
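
OSRM also runs a free public demo server, so the basic travel-time calculation is easy to reproduce yourself. A sketch follows; the coordinates are illustrative, and the demo server is rate-limited, so it is not suitable for production use.

    import requests

    # Driving route from downtown Denver to Denver International Airport,
    # as (longitude, latitude) pairs.
    start = (-104.9903, 39.7392)
    airport = (-104.6737, 39.8561)

    url = "https://router.project-osrm.org/route/v1/driving/{},{};{},{}".format(
        start[0], start[1], airport[0], airport[1])
    route = requests.get(url, params={"overview": "false"}).json()["routes"][0]
    print("{:.0f} min, {:.1f} km".format(route["duration"] / 60,
                                         route["distance"] / 1000))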

Visualizing 24 Hours of Subway Activity in New York City
Will Geary takes a different approach to visualizing a day in the life of New York City’s subway system by forgoing a static basemap altogether. Instead, Visualizing 24 Hours of Subway Activity in New York City displays a day’s worth of location data related to train travel beyond the customary NYC Transit Map. Geary’s data visualization provides a brand new way of seeing how the subway runs in the city that never sleeps.

We hope you found these data visualizations as beautiful as we did. Let us know about any projects we missed on Twitter, Facebook, and LinkedIn!

Announcing Microsoft & CARTO Strategic Partnership


Today, we are excited to announce a strategic partnership between Microsoft and CARTO to bring cutting-edge Location Intelligence (LI) solutions to the enterprise world.

We are turning to Microsoft Azure cloud technology to power our global Basemaps and Location Data Services offering.

Here at CARTO, this marks another important milestone in our company’s journey. Microsoft’s enterprise reliability and reputation will strengthen our Basemaps and LDS solutions, which are already being used by market-leading Business Intelligence and Data Analytics companies such as Alteryx, Qlik, Pitney Bowes, and many others.

Currently, CARTO serves more than 100 million mapviews per month globally.

Microsoft and CARTO are strong believers in the power of location data and Location Intelligence to deliver tangible business outcomes, and this partnership is an important step in enabling access to LI tools in the enterprise world. CARTO has achieved co-sell status with Microsoft, enabling Microsoft’s salesforce to offer CARTO Builder and CARTO Engine products on top of Azure technologies to their corporate customers, driving the delivery of innovative solutions in the indoor analytics and retail optimization space.

 

“There is great potential for every single company to incorporate Location Intelligence & Location Data analysis solutions to help them gain a better understanding of the market landscape and of their customers’ and prospects’ behavioural patterns. The partnership with CARTO will allow us to enhance this type of offering to our clients through these Azure-based innovative solutions.” - Pilar López, CEO at Microsoft Ibérica

 

“Our mission is to democratize Location Intelligence by putting it at the center of our customers’ analytics strategy. Together with Microsoft, we will enable everyone to find insights in their location data, making it faster, easier, and safer than ever to deploy location analytics across entire corporations.” - Sergio Álvarez, CPO and Cofounder at CARTO

As part of the announcement, CARTO’s offering will be available in the Azure Marketplace to empower data analysts, developers, and data scientists to more easily consume location data and make informed business decisions.

The availability of CARTO Engine and CARTO Builder in the Azure Marketplace provides a single, unified platform to serve both CARTO and Microsoft users looking to purchase and deploy Location Intelligence within a few clicks, simplifying installation and configuration.

Would you like to learn more about the power of this partnership? Contact us and we’ll show you how location data plays a significant role in your business decisions.

How to Use Spatial Analysis In Your Site Planning Process


If you’ve ever been part of a new development project for a store or building, you know how many factors go into making decisions about physical location.

Whether you’re opening new stores, consolidating stores, or deciding on an optimal location for a distribution site, understanding the surrounding context of a site is vital to success.

Retailers, restaurant chains, hotels, and a new wave of brick-and-mortar industries have to make site planning decisions and they have to get them right the first time. These businesses are turning to location intelligence and advanced spatial analysis techniques to make sure they have everything they need to make the best decision possible.

Is spatial analysis a core part of your site planning process?

In this post, we’ll explore several ways that visualizing and analyzing location data can benefit your business in today’s hyper-competitive landscape.

Enrich your visualization with demographic and lifestyle data to uncover potential markets.

Most of us are familiar with the benefits of leveraging data—in marketing analytics or business intelligence, for example. By adding new sources of data to your existing datasets, you can gain strategic and cost-saving insights that help create a better picture for your site planning.

You’ll understand the potential customer base of an area—not just how many people are there, but whether they are your ideal customers.

In the example at the top of the page, a leading pharmacy retail chain wanted to know the best locations for 5 new pharmacies they were planning on opening in the Los Angeles metro area. We mapped their current pharmacy locations and then enriched the data (from our Data Observatory) with census tract polygons, demographic data, median household income, and car ownership data.
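
In code, this kind of enrichment is essentially a spatial join between store points and demographic polygons. Here is a minimal sketch with geopandas; the file names and column names are hypothetical stand-ins for the Data Observatory pulls described above.

    import geopandas as gpd

    # Hypothetical inputs: existing pharmacy points, plus census tracts that
    # already carry demographic, income, and car-ownership columns.
    stores = gpd.read_file("pharmacies.geojson")
    tracts = gpd.read_file("la_census_tracts.geojson")

    # Attach the attributes of each store's surrounding tract to the store.
    enriched = gpd.sjoin(
        stores,
        tracts[["geometry", "median_hh_income", "pct_car_ownership", "seniors"]],
        how="left",
        predicate="within",
    )
    print(enriched[["median_hh_income", "pct_car_ownership"]].describe())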

Using a centroid of geometries spatial analysis, we are able to find the 5 best locations for new sites based on a variety of factors. The Location Intelligence application adjusts dynamically as the retail chain tunes the ideal median household income and customer driving parameters on the right-hand side (on the backend), reporting the number of “Seniors Served” who fall within an 8-mile radius of each location and the “Potential Seniors Served,” the total number of seniors within the map.
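
The analysis itself boils down to two spatial primitives: a weighted centroid to propose a site and a buffer to count who it serves. A sketch continuing with the hypothetical tracts dataframe above (and assuming a metric projected CRS, so buffers can be taken in meters):

    from shapely.geometry import Point

    # Each tract's centroid pulls the proposed site toward it in proportion
    # to its senior population.
    w = tracts["seniors"]
    cx = (tracts.geometry.centroid.x * w).sum() / w.sum()
    cy = (tracts.geometry.centroid.y * w).sum() / w.sum()
    site = Point(cx, cy)

    # "Seniors Served": seniors whose tract centroid falls within 8 miles.
    radius_m = 8 * 1609.34
    served = tracts[tracts.geometry.centroid.within(site.buffer(radius_m))]
    print(int(served["seniors"].sum()), "seniors served")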

The pharmacy retailer can now take this visualization to the next level with their own location data, validating recommendations from this platform with their own customer transaction data from the Los Angeles region.

Combine location data with online transaction data to determine where to convert stores into distribution centers, or open new distribution centers.

A retail company provided us with 16 retail locations and 1700 online transactions in the Los Angeles region. With an increased demand for delivery from online orders, they wanted to convert 3 of their stores into online distribution centers. The distribution centers needed to be spaced to cover a wide area to serve the most customers.

The spatial analysis above shows the 3 optimal locations for distribution centers based on the locations of online sales and existing stores. The shaded regions represent a 15-minute driving radius around each distribution center, giving us insight into how many customers we can serve.
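
A common way to approximate this kind of siting problem is to cluster order locations and snap each cluster center to the nearest existing store. The sketch below uses scikit-learn’s k-means as a stand-in heuristic, not necessarily the analysis used above; the CSV file names are hypothetical, and real work would cluster in a projected CRS rather than raw longitude/latitude.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical (x, y) coordinates of 1,700 online orders and 16 stores.
    orders = np.loadtxt("la_online_orders.csv", delimiter=",")
    stores = np.loadtxt("la_store_locations.csv", delimiter=",")

    # Cluster demand into 3 groups; each centroid is an ideal depot site.
    centers = KMeans(n_clusters=3, n_init=10,
                     random_state=0).fit(orders).cluster_centers_

    # Snap each ideal site to the nearest real store to convert.
    for c in centers:
        nearest = stores[np.argmin(np.linalg.norm(stores - c, axis=1))]
        print("convert the store at", nearest)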

Distribution

This US-based e-commerce company (we’ve provided an image only to protect the customer’s data) wanted to expand into the European market. They wanted to know, based on their current transaction data, where the best place would be to open additional distribution centers in Europe.

This application shows, based on transactions from their top 5 European countries (aggregated to protect privacy), the optimal 2 locations for distribution centers (near Frankfurt and London). Obviously, the company will take other factors into account in their decision-making process, but this application provides a good starting point for exploring different potential sites.

Use mobility, transportation, and routing data to understand the likelihood that your customers will travel to a specific site or area.

Sanitas, a healthcare provider in Spain, uses CARTO to gain insights related to their customers’ locations. They are able to identify areas with a higher number of potential customers by combining demographic data from the Data Observatory with traffic, mobility, and routing data.

If you’re only looking at zip code or geographic segmentation, you’re missing vital context about how your customers move in relation to where they live and work. For example, Sanitas was able to see how crossing a railway track or a specific road with heavy traffic was a big factor in customers avoiding a certain site.

No one can predict the future with absolute certainty, but the more data—and more kinds of data—you include in your site planning, the more likely you are to make decisions that help your business compete, grow, and fill a genuine need in the marketplace and the community.

If you’re interested in learning more about how location intelligence can benefit your retail company or business, reach out today. We’d love to hear about your challenges and find ways to make you more competitive in the location that’s best for you. Request a demo or visit CARTO.com to find out more about our location intelligence solutions.

Map of the Month: Landmine Removal in Nagorno-Karabakh


Nagorno-Karabakh is a landlocked disputed territory in the South Caucasus, internationally recognized as part of Azerbaijan. As a result of the collapse of the Soviet Union, the region became locked in conflict, which intensified into all-out war from 1992 to 1994.

Landmines were laid across large swathes of land by both Armenian and Azerbaijani forces and cluster bombs were dropped extensively by the Azerbaijani Air Force. Around 20,000 people died and hundreds of thousands were displaced. A ceasefire was agreed in 1994. It held for over two decades but the lack of a formal end to the war left the Armenian population of Nagorno-Karabakh isolated. Fighting broke out again in April 2016.

The HALO Trust (HALO) is a UK-based NGO, leading the effort to protect lives and restore livelihoods threatened by landmines and all the deadly debris of war. HALO, together with its donors, has been clearing landmines and unexploded ordnance (UXO) in Nagorno-Karabakh since 2000.

Nagorno-Karabakh is a remote and beautiful place, with a unique identity, but its people have been haunted by landmines for more than two decades. In fact, Nagorno-Karabakh has one of the highest per capita rates of landmine accidents in the world, on par with Afghanistan, and a third of the victims are children.

HALO has been using CARTO to visualize and summarize landmine, UXO, and accident data. HALO can quickly map, display, and analyze data dynamically using the Esri ArcGIS Server connector, which allows for near real-time data updates every hour. Using CARTO technology, HALO is able to visually demonstrate to donors the impact it is achieving through its mine clearance and UXO programmes in 16 different countries around the world.
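
For a sense of what that connector pulls, any ArcGIS Server feature layer can also be queried directly over its REST API. A sketch follows; the service URL is a placeholder (HALO’s actual endpoint is not public), and GeoJSON output requires a server that supports the f=geojson format.

    import requests

    # Placeholder: <server>/arcgis/rest/services/<service>/FeatureServer/<layer>
    LAYER = "https://example.org/arcgis/rest/services/minefields/FeatureServer/0"

    params = {"where": "1=1", "outFields": "*", "f": "geojson"}
    features = requests.get(LAYER + "/query", params=params).json()["features"]
    print(len(features), "hazard-area features fetched")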

To learn more or to donate, please visit: http://www.halotrust.org.

Every month, CARTO highlights a Map of the Month that uses location data to tell a story in a clear, compelling, and visually stunning way. We love hearing your suggestions! If you have a map you think would be a good Map of the Month, please submit it here.
