
Posts Tagged ‘sig’

Creating value through Open Data

2016/02/19

The benefits of Open Data are diverse, ranging from improved efficiency in public administrations and economic growth in the private sector to wider social welfare.

(Source: http://www.europeandataportal.eu/)

Open Data can enhance performance and contribute to improving the efficiency of public services. Greater efficiency in processes and in the delivery of public services can be achieved through cross-sector sharing of data, providing faster access to information and revealing, for example, unnecessary spending.

The economy can benefit from easier access to information, content and knowledge, in turn contributing to the development of innovative services and the creation of new business models.

Social welfare can be improved as society benefits from information that is more transparent and accessible. Open Data enhances collaboration, participation and social innovation.


The European Union has adopted legislation to foster the re-use of Open (Government) Data. The expected impact of this legislation, combined with the development of data portals, is to drive economic benefits and greater transparency. Economic benefits are primarily derived from the re-use of Open Data. The value is there; the question is how big.

Thus, within the context of the launch of the European Data Portal, the European Commission wished to obtain further evidence of the quantitative impact of the re-use of public data resources. A study was carried out to collect, assess and aggregate the economic evidence and to forecast the benefits of the re-use of Open Data for the 28 European Member States and the EFTA countries, further referred to as EU 28+, for the period 2016-2020.

Direct benefits are monetised benefits realised in market transactions, in the form of revenues and Gross Value Added (GVA), the number of jobs involved in producing a service or product, and cost savings. Indirect economic benefits include new goods and services, time savings for users of applications built on Open Data, knowledge-economy growth, increased efficiency in public services and growth of related markets.

The market volume represents the total realised sales volume of a specific market, i.e. the value added. A distinction can be made between the direct market size and the indirect market size; together they form the total market size for Open Data. For 2016, the direct market size of Open Data is expected to be 55.3 bn EUR for the EU 28+. Between 2016 and 2020, the market size is expected to increase by 36.9%, to a value of 75.7 bn EUR in 2020, including inflation corrections. For the period 2016-2020, the cumulative direct market size is estimated at 325 bn EUR.
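As a quick sanity check, the 36.9% figure follows directly from the two endpoint values quoted above; a snippet of purely illustrative Python arithmetic (the values are copied from the study):

```python
# Endpoint values from the European Data Portal study quoted above.
direct_2016 = 55.3  # bn EUR, direct market size in 2016
direct_2020 = 75.7  # bn EUR, forecast direct market size in 2020

growth_pct = (direct_2020 / direct_2016 - 1) * 100
print(f"Growth 2016-2020: {growth_pct:.1f}%")  # -> 36.9%, matching the study
```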

New jobs are created through the stimulation of the economy and a higher demand for personnel with the skills to work with data. In 2016, there will be 75,000 Open Data jobs within the EU 28+ private sector. By 2020, this number is forecasted to increase to just under 100,000 Open Data jobs, a 32% growth over the 5-year period. Thus, in the period 2016-2020, almost 25,000 new direct Open Data jobs will be created.

Based on the forecasted EU28+ GDP for 2020, whilst taking into account the countries’ respective government expenditure averages, the cost savings per country can be calculated. The accumulated cost savings for the EU28+ in 2020 are forecasted to equal 1.7 bn EUR.

The aim of efficiency is to improve resource allocation so that waste is minimised and the outcome value is maximised, given the same amount of resources. Open Data can help in achieving such efficiency. The study combines insights on the efficiency gains of Open Data with real-life examples. Three exemplar indicators are assessed in more detail: how Open Data can save lives, how it can be used to save time, and how it helps achieve environmental benefits. For example, Open Data has the potential to save 1,425 lives a year in road accidents (i.e. 5.5% of European road fatalities), and as many as 7,000 lives a year by enabling earlier resuscitation. Furthermore, applying Open Data in traffic can save 629 million hours of unnecessary waiting time on the roads in the EU.

The majority of studies performed previously are ex-ante estimations. These are mostly based on surveys or indirect research and produce a wide range of different estimates. No comprehensive and detailed ex-post evaluations of the materialised costs and benefits of Open Data are available. Now that governments have defined Open Data policies, the success of these initiatives should be measured. The study offers several recommendations for doing so.

The report goes into further detail on how Open Data has gained importance over the last several years. Furthermore, it provides insight into how Open Data can be used and how this re-use differs around Europe. These insights are used to develop a methodology for measuring the value created by Open Data. The resulting values are presented graphically, providing insight into the potential of Open Data for the EU28+ up to 2020.

 

(Source: http://www.europeandataportal.eu/)

 


Downloads from the CNIG. Open Source done right!

2016/02/08

Hello GIS friends,
For work reasons that are beside the point, I have had to systematically trawl the CNIG download site: http://centrodedescargas.cnig.es/CentroDescargas/inicio.do
It is a marvel.

[figure: cnig-20160208-01]

For reasons that are also beside the point, I have to do this same exercise from time to time at cartographic institutes all over the world, and the CNIG's site is without doubt the easiest for me to use, with the most logical data model and the most reliable links anywhere. The only obligation is mandatory attribution of the data. That is not too much to ask, is it? Since 27 December, IGN data have been free under CC BY 4.0.
https://creativecommons.org/licenses/by/4.0/

It is therefore mandatory to credit the source at the foot of images, in credits, and so on, above all in publications, commercial uses, articles, etc. (For example, you can put "<such-and-such dataset> CC BY Instituto Geográfico Nacional", or rather "derived from <such-and-such dataset> CC BY ign.es", or similar.)

[figure: cnig-20160208-02]

Whether we need PNOA imagery (Plan Nacional de Ortofotografía Aérea), a high-resolution digital terrain model or historical images of our home town, we just have to dig a little into the geodata catalogue of the Instituto Geográfico Nacional (Centro Nacional de Información Geográfica) and we will get them.

For example, last week I had to find data on several Spanish cities to build a number of 3D scenes for a client. Here I found, on the one hand, a 5 m DSM produced from LiDAR sources; I also downloaded the linear vectors and city blocks from Cartociudad, and then from the CATASTRO site (https://www.sedecatastro.gob.es/OVCFrames.aspx?TIPO=TIT&a=masiv) I downloaded the geometries of all the buildings in the city (which I plan to geoprocess to remove unwanted shapes and to assign precise heights thanks to the LiDAR downloaded earlier).
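As a sketch of that last geoprocessing step, here is one possible way to assign an elevation to each footprint with Python, geopandas and rasterstats. The file names are hypothetical placeholders, and the buildings and the DSM are assumed to share a metric CRS:

```python
import geopandas as gpd
from rasterstats import zonal_stats

# Hypothetical inputs: building footprints (CATASTRO) and the 5 m
# LiDAR-derived DSM (CNIG download). File names are placeholders.
buildings = gpd.read_file("buildings.shp")

# Median DSM value inside each footprint: a simple, outlier-resistant
# estimate of roof elevation (subtract a DTM to get height above ground).
stats = zonal_stats("buildings.shp", "dsm_5m.tif", stats=["median"])
buildings["roof_elev"] = [s["median"] for s in stats]

# Drop degenerate shapes (slivers) before building the 3D scene;
# the 10 m² threshold is arbitrary.
buildings = buildings[buildings.geometry.area > 10]
buildings.to_file("buildings_with_heights.shp")
```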

And why not add geometries from OpenStreetMap (https://www.openstreetmap.org/export) or from the Base Topográfica Nacional BTN25 itself to complete the scene?

[figures: barcelona-bldg-osm-capture-20160112, MADRID-GISDATA]

The truth, friends, is that since Open Data took off, we geographers and related professionals have plenty to 'play' with in our analyses.
http://idee.es/

I hope you find this interesting.

Kind regards,

Alberto
Geographer / MSc GIS UAH / Multimedia Designer

Cheers! Geovisualization.net has just been released!

2016/02/03

Risk exposure. Geoprocessing using Open Source Data!! Next steps!!

2016/01/29

Now that we have completed a first example, let's continue with a real-world one. It's important to work on a Data Model that defines what we understand as a risk and how important each risk is. For instance, high-voltage power lines are an actual risk, but the closer we are, the bigger the risk: say a score of 3 if we are within 50 m and 1 if we are 150 m away... It's only a guess.

[figure: risk3]

The same applies to antennas, petrol stations, etc.

This is my Data Model defined over the city of Madrid, Spain.

1. LINES – roads with speed >50 km/h: risk = 3 within 100 m

2. LINES – power lines: risk = 3 within 100 m

3. POINTS – antennas, high-voltage towers, petrol stations: risk = 3 within 50 m; risk = 2 within 100 m; risk = 1 within 150 m

4. AREAS – cement factories, electric substations, waste storage facilities: risk = 3 within 50 m; risk = 2 within 100 m; risk = 1 within 150 m

(NOTE: you can choose your own risk thresholds and weights. Also note that this information, downloaded from open data sources (Cartociudad, CNIG), has not been double-checked and is used as is.)
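The analysis itself was done with the software listed at the end of this post, but as an illustration, here is a minimal buffer-and-score sketch in Python with geopandas. The file names, the two example hazard layers and the census-block layer are all hypothetical, and a metric CRS is assumed:

```python
import geopandas as gpd
import pandas as pd
from shapely.ops import unary_union

# Hypothetical layers in a metric CRS (e.g. EPSG:25830 for Madrid).
blocks = gpd.read_file("census_blocks.shp")
hazards = {
    "power_lines.shp": [(100, 3)],                         # one ring
    "petrol_stations.shp": [(150, 1), (100, 2), (50, 3)],  # three rings
}

total = pd.Series(0, index=blocks.index)
for path, rings in hazards.items():
    layer = gpd.read_file(path)
    score = pd.Series(0, index=blocks.index)
    # Apply the largest buffer first so the inner rings overwrite it
    # with their higher risk values.
    for dist, risk in sorted(rings, reverse=True):
        zone = unary_union(layer.buffer(dist))
        score[blocks.intersects(zone)] = risk
    total += score

blocks["risk_exposure"] = total  # analogous to the ALL2 field mentioned below
```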

[figures: risk2, risk1]

How is this risk, or this combination of risks, impacting the population of Madrid?

[figure: risk4]

Can we extrapolate these patterns to other cities in the world?
We will definitely continue this analysis shortly.

You can also visualize this analysis using CartoDB; the field for "risk exposure level" is called ALL2 and ranges from 2 to 12.

Software: ArcGIS 10.3, Global Mapper 17, CartoDB

Please share if you enjoyed it… or just to say hello!

Alberto C
MSc GIS and remote sensing UAH

Risk exposure. First steps

2016/01/15

Knowing how to geoprocess features is key if what we want is to assess risk exposure. What is a risk? Which are the risks? Where are the risks? How important is each risk?

[figure: risk-exposure-fist-steps-20160115]

Playing with CartoDB

2016/01/15

I have been hearing about CartoDB for a long time now, and I have been practising on its website, visualizing simple databases.

  1. Create an account
  2. Upload your own data or take a dataset from the gallery
  3. In data view, select the column you want to symbolize/visualize
  4. Convert it to NUMBER if it is stored as STRING
  5. In map view, select the WIZARD
  6. Choose CHOROPLETH, column _población
  7. Visualize the result

[figures: cartodb03, cartodb01, cartodb02]

Change detection – Detecting changes in polygons

2015/10/22

[figure: change-detection-bogota-telemediciones-20151023-02]
THE IDEA: DEMONSTRATING HOW DYNAMIC A CITY IS, AND THUS HOW IMPORTANT IT IS TO HAVE AN UP-TO-DATE DATASET
[figure: bogota-change-detection-20151105-02]

THE FACTS: THE CITY OF BOGOTÁ IN COLOMBIA 2012-2014

Overall growth rate: -0.12%, taking into account only the difference in the buildings captured between 2012 and 2014 (we can do this because we used the same data capture model in both years).

(According to the cadastral census, in 2015 the city added 51,531 new urban parcels. In total there are 2,402,581 parcels in the city, of which 266.9 million square metres are fully built-up area. Source: http://www.eltiempo.com/bogota/crecimiento-bogota-/15394797)

[figure: bogota-change-detection-20151105]

THE PROCEDURE: take the centroids of the buildings and run a spatial join showing presence/absence, with a 10 m accuracy threshold: if a centroid has not moved more than 10 m, it is the same building. If a 2012 centroid is no longer present in 2014, the building is considered demolished; if a new centroid appears, it is considered a new building.
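As an illustrative sketch of the procedure (the original analysis was done in a desktop GIS), here is how the presence/absence join with the 10 m threshold could look in Python with geopandas; the file names are hypothetical and a metric CRS is assumed:

```python
import geopandas as gpd

# Hypothetical building footprints for the two epochs, metric CRS.
b2012 = gpd.read_file("bogota_buildings_2012.shp")
b2014 = gpd.read_file("bogota_buildings_2014.shp")

c2012 = gpd.GeoDataFrame(geometry=b2012.centroid, crs=b2012.crs)
c2014 = gpd.GeoDataFrame(geometry=b2014.centroid, crs=b2014.crs)

# Match each 2012 centroid to the nearest 2014 centroid within 10 m;
# unmatched 2012 centroids are considered demolished buildings.
matched = gpd.sjoin_nearest(c2012, c2014, max_distance=10, how="left")
demolished = matched[matched["index_right"].isna()]

# 2014 centroids never matched by a 2012 centroid are new buildings.
new = c2014[~c2014.index.isin(matched["index_right"].dropna())]

print(f"demolished: {len(demolished)}, new: {len(new)}")
```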

DENSITY MAPS + 3D buildings help to quickly focus on the highlights.
[figure: bogota-change-detection-news-20151021]

 

The geographic opening titles of 'Up in the Air', with George Clooney

2015/08/24

For someone who handles maps daily, it is a pleasant surprise to come across these opening titles of the Jason Reitman film starring George Clooney, 'Up in the Air'. Clouds, crop fields, cities in 2D and 3D. How well I chose my profession :-)

For someone who travels a lot, allowing for the differences, this video, also from the same film 'Up in the Air', is a charming reminder of what a trip through a thousand and one airports is like: security arches, packing, unpacking...

I hope you like it!

Alberto

(Source: Jose Ignacio Sánchez of Nosolosig)

DTM validation using Google Earth (and RMSE extraction)

2015/03/10

Hi guys,

Surfing the internet is great when you need to figure something out. I needed to validate some DTMs from unknown sources against another unknown source (but at least a reasonably reliable one, Google Earth).

All we need is:

  • Google Earth
  • TCX Converter
  • ArcGIS
  • Excel

This is the procedure I have followed:

  1. First of all we draw a path over our AOI using Google Earth and save it as KML,
  2. We open this KML in TCX Converter, add the heights and export it as CSV,
  3. We import the CSV into ArcGIS,
  4. We use the 'Extract Multi Values to Points' tool to get the values of our DTM and the values from Google Earth into the same table,
  5. We use Excel to calculate the RMSE and obtain a quantitative result.
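Step 5 can also be scripted rather than done in Excel. A minimal Python sketch, assuming a hypothetical CSV from step 4 with columns dtm_z (our DTM) and ge_z (Google Earth heights):

```python
import numpy as np
import pandas as pd

# Hypothetical export from step 4: one row per point along the path.
pts = pd.read_csv("validation_points.csv")

diff = pts["dtm_z"] - pts["ge_z"]          # per-point height difference
rmse = float(np.sqrt((diff ** 2).mean()))  # root mean square error
print(f"RMSE: {rmse:.2f} m")
```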

These are the values in our DTM

[figure: dtm-validation-02]

This is the path we have to draw in Google Earth

[figure: dtm-validation-03]

Using TCX converter we get the heights out of Google Earth’s DTM

[figure: dtm-validation-01]

Using the tool ‘extract multi values to points‘ we get the heights out of our DTM

[figure: dtm-validation-04]

We measure the differences and extract the RMSE.
Are we within our acceptance threshold or expected level of accuracy?

You guys have to figure this out for yourselves!!!

Lost regarding the RMSE calculation? I think you should take a look at this other post.

[figures: dtm-validation-05, dtm-validation-06]

Hope you guys have enjoyed this post; if so, don't forget to share it.

Alberto Concejal
MSc GIS and QCQA expert (well, this is my post and I say what I want :-))

Pearson correlation and GIS

2014/11/28


[figure: pearson-01]
Do these two variables have a correlation? To answer this important question, first of all we have to know that we can only take advantage of Mr Pearson's statistical correlation tool if the relationship is linear and there are no outliers.

If I love chocolate, does this mean I have a tendency to be chubby? Or is there no relationship at all? Let's figure it out.

For this particular occasion, the input XY data are two sets of DTM heights. My guess is the following: if the correlation is too high, I may deduce that they are not independent products and that one might have been created from the other. In other words, someone might have tried to cheat by using a different source than the one stated. In GIS, things are sometimes not exactly as expected, and you need to be assertive and make a plan for discovering these minor issues.

 

 

 

Let's start from the beginning: if source 1 is the same as source 2, the correlation would be perfect, correct? The answer is yes: r (the Pearson correlation) would be 1. So, if this were the chocolate-and-chubbiness question, the answer would be 100% certain, but this hardly ever happens in real life (a direct effect with no other explanation or interacting variable... why is it always so complicated?).

$$ r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}} $$

[figure: pearson-04]

With real data, you would not expect to get values of r of exactly -1, 0, or 1. For example, the data for spousal ages (white couples) has an r of 0.97. Don't ask me where I got this weird source (well, just in case: http://onlinestatbook.com/2/describing_bivariate_data/intro.html).

[figure: age_scatterplot]

If I fill source 2 with random numbers, the correlation is, accordingly, almost nil (in this case r = 0.17).

[figure: pearson-06]

Now, if we look at the diagram of the first two sources and compute the Pearson correlation coefficient (r = 0.24), the correlation is very weak.

[figure: pearson-03]

But that was only a very small part of the table (only 30 rows), so when I do the same calculation over the +13,000 rows I really need, I get the figures below. (By the way, there is no need to use the complicated formula above; in Excel you can simply use =PEARSON(A1:An;B1:Bn).)
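The same calculation is equally short in Python. A minimal sketch with NumPy, using synthetic data to stand in for the two height columns:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the two DTM height columns (~13,000 rows).
source1 = rng.normal(600, 50, 13_000)                 # heights in metres
source2 = 0.5 * source1 + rng.normal(0, 40, 13_000)   # partially related

r = np.corrcoef(source1, source2)[0, 1]  # Pearson's r
print(f"r = {r:.2f}")                    # moderate for this synthetic case

# Sanity checks at the extremes:
print(np.corrcoef(source1, source1)[0, 1])             # identical -> 1.0
print(np.corrcoef(source1, rng.random(13_000))[0, 1])  # random -> near 0
```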

[figure: pearson-07]

So the correlation is now moderate, which makes me deduce that at least the sources seem to be different; I would need more clues to think that my customer might actually have tried to cheat me by using the same source for both datasets.

Summarizing:

r = 1: correlation is PERFECT

0.75 < r < 1: correlation is STRONG

0.5 < r < 0.75: correlation is MODERATE

0.25 < r < 0.5: correlation is WEAK

r < 0.25: almost NO correlation; the variables are hardly related

I hope you guys have found this post interesting,
looking forward to hearing where you could use it, and/or to your feedback,

Regards,

Alberto Concejal
MSc GIS