About Me

Geographer from Unicamp (2014), including a year of university exchange at the University of Wisconsin (USA). Experienced in geotechnologies, GIS, and urban planning, with internships at Agemcamp, the American Red Cross, and, currently, the Grupo de Apoio ao Plano Diretor at Unicamp.

Sunday, March 10, 2013

Navigation Part II - Map and Compass

Introduction

The last report covered the map-making and preparation for this week's activity. This report covers the navigation exercise itself, which took place at the Priory last Monday, March 4th. As mentioned before, this is one piece of a larger exercise that analyzes different ways to navigate. This week, the method was navigation with a compass and a reference map; in the following weeks, navigation with a GPS unit will be covered. The map used for this activity is the one produced by the group in the last exercise, used along with the compass; the pace count was also used.

Methodology

The activity consisted of navigating across different elevations, inside the woods, while a snowstorm was forming. Appropriate preparation was therefore necessary, and the first step was dressing properly for the field.

One important feature not yet included in the maps was the set of course points each group would need to reach. The feature class for these points was withheld on purpose so the class could practice plotting points on a map. A table with the UTM coordinates of each point was given to the students, who plotted them on the map (Figure 1). To do so, the closest coordinate must be found in the grid; then, considering the distance between grid labels, the point is offset from the label by the necessary amount, which is an approximation.

Figure 1 - Use of table of coordinates to plot point in the map.
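The interpolation between grid labels can be sketched in a few lines. This is a minimal sketch with hypothetical coordinates (the actual course-point coordinates were given in class and are not reproduced here); it assumes the 20-meter grid interval the group used.

```python
# Sketch of plotting a coordinate between grid labels (hypothetical values).
# Grid lines are labeled every 20 m; the point's offset past the nearest
# lower label, as a fraction of the interval, locates it on the paper grid.

def offset_within_grid(coord, interval=20):
    """Return (nearest lower grid label, fraction of the interval past it)."""
    label = (coord // interval) * interval
    return label, (coord - label) / interval

# Example: easting 617,493 falls 13 m past the 617,480 grid line,
# i.e. 65% of the way toward the 617,500 line.
label, frac = offset_within_grid(617_493)
```

The fraction is then eyeballed (or measured with a ruler) along the paper distance between the two labels.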

After the points were correctly plotted, a line was drawn between them to represent the path the group should follow. To stay on this path, a known direction is needed, so the compass was used to calculate the azimuth between each pair of points. The first step in acquiring an azimuth is to correct the compass for magnetic declination; because Eau Claire's declination is close to zero, this was not necessary here. The direction-of-travel arrow on the compass is then placed parallel to the path. Holding the compass firmly to avoid any movement, the housing is turned until it is parallel with the map's north (Figure 2). This is not perfectly exact, since the alignment can only be judged by eye; one tactic is to compare the lines inside the compass housing with the grid lines.

Figure 2 - Calculating azimuth with the compass.

With these values, a table with the starting point, end point, azimuth, and distance of each leg was created to keep the paths organized. Only the first three fields were completed before the exercise, however: instead of entering the estimated distance of each leg, which could have been measured with a ruler and converted using the map scale, the number of paces was recorded there after the group walked each leg.
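Because the plotted points live in a planar (UTM) grid, the azimuth and distance of each leg could also be computed directly from the coordinates, as a check on the compass reading. This is a sketch with hypothetical coordinates, not the group's actual course points:

```python
import math

def azimuth_and_distance(x1, y1, x2, y2):
    """Grid azimuth (degrees clockwise from grid north) and distance
    between two points in a planar (e.g. UTM) coordinate system."""
    dx, dy = x2 - x1, y2 - y1
    az = math.degrees(math.atan2(dx, dy)) % 360   # atan2(dx, dy): angle from north
    return az, math.hypot(dx, dy)

# Hypothetical course points 100 m apart, one due east of the other:
az, dist = azimuth_and_distance(617_400, 4_963_000, 617_500, 4_963_000)
```

Note the argument order `atan2(dx, dy)` rather than the usual `atan2(dy, dx)`: azimuths are measured from north, not from the x-axis.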

In the field, to find the correct direction, the compass housing is turned to the desired value, and then you turn yourself until the needle overlays the housing's north mark. Dividing tasks within the group helped efficiency. I was responsible for holding the compass and maintaining the right direction by sighting a second person, Kent, who would walk as far as he could and then adjust his position according to my compass view. This approach was chosen because the trees were almost indistinguishable from one another, so they would be hard to use as references. Once the direction was set with these two people, the third group member, Joel, would walk while counting paces to keep track of the distance covered. When both arrived at the reference point, Kent, the procedure started over, until approximate calculations indicated the course point was close. Following these procedures, it was possible to find the required points.

Discussion


Some issues related to the precision of the measurements deserve attention, both in the plotting procedure and in taking the azimuth. For the first, as noted, it is extremely rare for points to fall exactly on a grid-line intersection, so it is necessary to rely on approximation. This is where the grid-line interval shows its importance: a small interval can clutter the map, but a large interval compromises precision in this task, since the closer the grid lines are, the more precisely a point can be plotted. Considering that, the group's choice of a 20-meter interval can be considered consistent.


It is possible, even with large intervals, to plot with high precision: after calculating the relation between the coordinate labels and the coordinate to be found, a ruler could be used to plot the point in its exact place. However, this procedure is time-consuming, and such precision was not strictly necessary with a 20-meter interval.


Precision is also a concern when calculating the azimuth. Since there is a ruler on the compass, it can be placed exactly parallel to the path; the alignment with north, however, can only be judged by eye. As noted before, the north-south lines inside the compass housing were useful for comparison with the grid lines, which improved the precision of the measurement. Although it is still not completely accurate, the error should not exceed 5 degrees, which does not compromise navigation over short distances such as those in this exercise. It can be a huge problem, though, over very long distances, as in maritime navigation centuries ago.
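The effect of that 5-degree bound can be quantified: walking a straight leg with a constant azimuth error produces a lateral (cross-track) offset of roughly distance times the sine of the error. A minimal sketch, with a hypothetical 100 m leg:

```python
import math

def cross_track_error(distance_m, azimuth_error_deg):
    """Approximate lateral offset after walking a straight leg
    with a constant azimuth error."""
    return distance_m * math.sin(math.radians(azimuth_error_deg))

# A 5-degree sighting error over a hypothetical 100 m leg:
err = cross_track_error(100, 5)   # about 8.7 m
```

An 8-to-9-meter offset is noticeable but recoverable on legs this short; over kilometers, the same angular error grows proportionally, which is the maritime problem mentioned above.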


Another problem did not compromise the navigation only because the distances were small. The professor pointed out that, although he asked for a UTM grid, the correct way to navigate with a compass is to use the Geographic Coordinate System (GCS). The reason is that a UTM grid does not point to true north: its grid lines are parallel and equally spaced, whereas in the real world the meridians converge as they approach the poles. That is how GCS grid lines behave (Figure 3).



Figure 3 - Top: a UTM grid for the United States; bottom: a GCS grid. The second shows curvature representing the map's true north, while the UTM grid does not.

This matters because the compass relies on true north, which is what the needle indicates. If north is distorted by the map, the accuracy of the measurements can be compromised. Fortunately, since the area of interest is small, the curvature of the grid lines would be slight, so this particular navigation was not affected.

Another matter that could have been improved is the organization of the table made after plotting the points and taking the azimuths. As mentioned, three fields were completed beforehand, while the fourth, intended for distance, was used to note how many paces the group took.


It would have been better to record the calculated distance of each line before walking it, rather than after. Using the scale, the distance could be obtained in meters and then, using Joel's pace count, converted to paces. It would then have been easier to know how far the group was from the target point. Unfortunately, a ruler was not available at the time, and time was too short to do this.
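That conversion is simple arithmetic. This sketch assumes a hypothetical 1:2,000 map scale (the actual scale is not stated here) and the 1.45 m pace length measured in Part I:

```python
def map_distance_to_paces(map_cm, scale_denominator, pace_m):
    """Convert a distance measured on the map (cm) to ground meters
    and then to a pace count."""
    ground_m = map_cm * scale_denominator / 100   # cm on map -> m on ground
    return ground_m, ground_m / pace_m

# Hypothetical leg: 5 cm on a 1:2,000 map, pace length 1.45 m (from Part I):
ground_m, paces = map_distance_to_paces(5, 2000, 1.45)   # 100 m, ~69 paces
```

With these numbers filled in before leaving, Joel's running count could have been compared against a target at every leg.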


Another concern with distance is that the pace count was obtained on a flat surface without snow. A totally different surface was faced in the field: slopes with about 30 cm of snow. Paces in these conditions are usually shorter than on a regular surface, which means the group had to take more steps than the calculated amount.


Also regarding the adverse conditions, it became clear that more preparation is needed before returning to the field. Walking through deep snow on the slopes was challenging, and since the exercise took place inside the woods, dense branches constantly obstructed the group's path (Figure 4). It was rarely possible to follow a straight line, so the group often had to walk around an obstacle and then return to the correct direction.



Figure 4 - Impact of natural issues on the navigation.


Following all the procedures mentioned so far, it was possible to get from the initial point to the second point, even though the group ended up displaced slightly to the east. This suggested that some error had been made in the direction.

In the field, the group discussed the possible reasons and concluded that the azimuth should be sighted over longer distances. If a small error is made in a first sighting, it is carried through all the following ones, so the fewer stops the group makes, the smaller the accumulated error.

With that in mind, the same procedure was used to find the next point. After walking the necessary distance, the group faced a deep ravine, and the map was analyzed to help identify the area. Unfortunately, a misinterpretation was made: from the contour lines, it looked as if the point was at the top of the ravine (Figure 5). One member stayed at the location indicated by the compass while the others searched the surroundings for the point. A long time passed this way, until Martin Goettl, one of the instructors, came and showed that the point was not at the top of the ravine, but at its bottom.


Figure 5 - Location of point 3 on the contour lines.

After examining the contour lines more carefully, it was clear that this was correct and could have been avoided. Two lessons were learned: first, do not doubt your compass; errors may occur, but they will not put you far off track as long as you handle it carefully. Second, analyze the information on the map more carefully: a very detailed map does not help if the reader does not take the time to interpret it.

By this point it was already past 5:00 PM, so Martin decided to take the group to the next point while there was still daylight, and then head back before dark.

Conclusion

The main lesson from this exercise concerned the reliability of the compass. It was disappointing not to accomplish the goal of the project, but it was important to make this mistake and recognize it; that way, the idea behind the error is understood and it will not be repeated. Misplacement due to a lack of user precision is real, but it would not cause large consequences, so it is essential to trust the compass and not doubt your resulting position too much.

The contour lines were included in the map to support identifying the area, and they would have been very useful if the correct analysis had been made. Another lesson follows from that: only put information on a map if the reader can understand and interpret it correctly, otherwise it can be more confusing than helpful. In this project the users knew how to interpret contour lines; the problem was simply that a more careful reading was needed. Still, it is important to understand that the value of information on a map is relative to its target audience: if something is to be released to the general public, it may not be a good idea to include technical concepts and features.

Finally, it is important to recognize the positive side of using a compass and map: even inside the woods, precision is not compromised. Inside a really dense forest, a GPS unit can show a large error or even fail to acquire enough satellites to display coordinates, although technology keeps improving to avoid these problems. The downside is that navigating with a compass and map is very time-consuming; today's demand for efficiency means more practical solutions have replaced this method.

Saturday, March 2, 2013

Navigation Part I - Preparation


Introduction

A huge part of geography is based on old-school navigation techniques. In the past, the search for better navigation methods was the main driver of geographic knowledge and exploration. Nowadays, GPS technology is predominant for almost all navigation purposes. However, as mentioned in earlier reports, it is essential to know the alternative methods so that you do not fully depend on this kind of technology, since it can fail on you.

Accordingly, this exercise consists of navigating at the Priory using different methods: first with a navigation map and a compass; the following week with a GPS but no map; and finally, in the last week, with both the GPS and a map. The goal is to identify the advantages and disadvantages of the tools used and which ones are essential. Before any of these navigations can be done, however, it is necessary to prepare. This report therefore explores the main elements of field preparation by producing the necessary material.

Methodology

For the first week of navigation, with compass and navigation map, it is first necessary to produce the navigation map itself with the required elements, and, since no device will be available to calculate distance, to measure each person's pace length.

The pace length is found by laying out a line of known distance, in this case 100 meters, and walking it several times while counting steps, until the counts show no significant discrepancy. To measure the 100 meters, the laser device was used: one student (Amy) aimed it while another (me) moved away until the necessary distance was reached (Figure 1). The start and end spots were then marked in the snow so the students would know where to begin and finish. From the resulting step counts, a simple calculation gives the pace length, which in this case was 1.45 meters per step.


Figure 1 - Right: Amy using the laser device to target the distance of 100m. The snow on the floor worked as the start point for the pace count. Left: The path of 100 meters while I was being targeted by the laser to guarantee a precise measurement.
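The "simple calculation" is just the course length divided by the average step count. A minimal sketch with hypothetical trial counts (the actual counts were not recorded in this report):

```python
def pace_length(course_m, step_counts):
    """Average step length from repeated walks of a known course."""
    avg_steps = sum(step_counts) / len(step_counts)
    return course_m / avg_steps

# Hypothetical trials: a 100 m course walked three times with
# consistent counts, giving roughly the 1.45 m/step used here.
length = pace_length(100, [69, 69, 69])
```

Repeating the walk and averaging matters: a single count can easily be off by a step or two, which changes the result by a couple of centimeters per step.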


To produce the map, the location where the navigation would take place had to be analyzed. Different elements can serve as references depending on where you are: in an urban area, buildings and streets may be important reference features. The Priory, however, sits in an open field, with basically only vegetation around and very few buildings, so the references in this case had to be natural features, such as vegetation types and elevation.

After identifying the important elements for the map, the related data had to be gathered, which requires evaluating the different sources from which data can be obtained. For convenience, the main data was made available to the whole class. For elevation and vegetation data, a good source is the USGS, through The National Map Viewer; the Digital Elevation Model for this map was obtained from the USGS. The two-foot elevation contours, however, came from a UWEC survey made when the university bought the area, and the imagery was obtained from the Wisconsin Regional Orthophotography Consortium 2010 (WROC 2010).

With the data gathered in ArcMap, the challenge is to include as much useful information as possible without making the map cluttered and busy. Cartographic techniques such as transparency and color changes were used to retain most of the information while emphasizing only what was essential. Since the work was done as a group, each member started an individual map (Figure 2), but after reviewing everyone's progress, the whole group focused on one of the maps, offering suggestions and improvements.
Figure 2 - Individual Map

The purpose of the map is the main thing to keep in mind when balancing different cartographic priorities. Since the objective was navigation within a certain portion of the map (the Area of Interest in Figure 2), the main references were prioritized there, while areas outside it could hold other map elements such as the north arrow and scale.


Discussion

This project showed how troubleshooting reinforces knowledge of a subject, in this case, projections. One of the most important features, the 2-foot contour lines, lacked projection information. In ArcMap, when data is added to a session, on-the-fly projection automatically places the feature in the data frame's projection, but this only works if the feature's labeled projection matches the one it was created in. Otherwise, the coordinates will not make sense and the feature will be placed far off.

With this kind of issue, it is also common to confuse two different tools in ArcToolbox: Define Projection and Project. The first simply labels the feature with a projection, overwriting the previous label, but does not change its coordinates. The second changes the coordinates, using the appropriate mathematics, and creates a new, projected feature. Here one might think the Project tool was needed, but that would not be possible since the feature was not even labeled yet.
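The distinction can be illustrated outside ArcGIS with the pyproj library (this is a sketch of the concept, not the ArcGIS tools themselves, and the coordinates are hypothetical): "defining" only changes how the same numbers are interpreted, while "projecting" actually transforms them.

```python
# Sketch of "define" vs "project", using pyproj rather than ArcToolbox.
from pyproj import Transformer

x, y = 617_400.0, 4_963_000.0        # hypothetical UTM zone 15N coordinates

# "Define Projection": the numbers stay the same; only the label changes.
defined = (x, y)                      # now *interpreted* as NAD83 / UTM 15N

# "Project": the numbers are mathematically transformed to another system.
to_wgs84 = Transformer.from_crs("EPSG:26915", "EPSG:4326", always_xy=True)
lon, lat = to_wgs84.transform(x, y)   # genuinely different numbers (degrees)
```

Defining a wrong label and then projecting bakes the error into the new coordinates, which is why the label had to be sorted out first in this project.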

Thus, it was necessary to examine the extent of the data in its properties, under the Source tab. The extent numbers are shown there without units (Figure 3), but it is easy to tell whether the system is projected or geographic from their magnitude: large numbers indicate a projected system (meters or feet), while small numbers with many decimals indicate a geographic one (degrees).


Figure 3 - When dealing with an undefined coordinate system, it's possible to interpret the extent coordinates and try to find the correct projection.
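That magnitude check can be written as a rough heuristic. This is a sketch of the reasoning used above, with hypothetical extents, not a rigorous test (some projected systems near the origin would fool it):

```python
def looks_geographic(xmin, ymin, xmax, ymax):
    """Rough heuristic: extents within +/-180 longitude and +/-90 latitude
    suggest degrees (geographic); larger values suggest a projected
    system in meters or feet."""
    return (abs(xmin) <= 180 and abs(xmax) <= 180
            and abs(ymin) <= 90 and abs(ymax) <= 90)

# Hypothetical extents:
looks_geographic(-91.5, 44.8, -91.4, 44.9)                 # degrees
looks_geographic(617_000, 4_962_000, 618_000, 4_963_000)   # meters/feet
```

It narrows the candidates; identifying the exact system still requires testing, as described next.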

It was therefore known that the system was projected and, given the purpose of the map, that the feature was located in Eau Claire. Considering the main coordinate systems, there were a few options: UTM, the Wisconsin State System, and the Wisconsin State Plane System - Central. The first thought was UTM, but in that system the Y coordinate in the northern hemisphere is the actual distance from the equator, and Y values around 476 kilometers did not seem right for an Eau Claire location.


Since the feature had an undefined coordinate system, there is no harm in overwriting the label repeatedly with the Define Projection tool to run tests, so it was fine to test the UTM coordinate system even though it seemed odd. However, the data format was DWG, so Define Projection could not simply be run from ArcToolbox; it was necessary to open ArcCatalog and define the projection for the entire dataset. Once done, the results confirmed that UTM was not the right projection.

Figure 4 - Within the same coordinate system, there are multiple variations of units and datums.

Looking at the extent again, the Wisconsin State System seemed reasonable. However, there is a long list of Wisconsin State Systems, varying by datum and units (Figure 4). As mentioned, there is no problem in testing different projections, so the first test used "NAD 1927 Wisconsin TM (Meters)". The feature finally landed closer to Eau Claire, but it was still too far from the Area of Interest. Even without the Area of Interest as a reference, the misplacement could be perceived by comparing the contour lines with the landscape in the satellite image: they did not fit at all (Figure 5).

Figure 5 - The feature was placed far to the north-east of the Area of Interest. In the zoomed view of the contour lines, it is possible to notice how they did not fit the landscape of the satellite image.

That’s where the knowledge not only about coordinate systems and projections, but also about datums are useful. Different datums (NAD 1927, NAD 1983, WGS84) don’t match each other, even if you have the same projection and coordinate system. Hence, even setting up everything in the right properties (coordinate system, projection and units) won’t help if the wrong datum is chosen: the features will be far off, as it happened in this project. The right datum in this case was NAD 1983, which was discovered in the next test: the feature fit the landscape and fell inside the Area of Interest.

After dealing with the data issues, it was interesting and challenging to handle a considerably large amount of information in the map without making it cluttered and busy. The contour lines could easily be de-emphasized by adding some transparency or changing their color to a lighter gray.

The grid was a little more complicated. Since the goal was to plot points on it, the finer the interval, the more precise the plotting would be; however, the lines of a very small interval could almost inhibit the readability of the actual map features. Several intervals were tested, and the decision to keep a 20-meter interval came after editing it to reduce its emphasis on the map: the lines were changed from continuous to dotted, and their color was changed to avoid confusion with the contour lines.

Conclusion

This exercise exposed two faces of map-making: the strict rules of data sources and the flexibility of cartography. It reinforced the dynamic nature of geography and how everything must be considered in light of its purpose and background.

When dealing with coordinate systems, projections, datums, and units, you must be exact and certain of the information to keep the data reliable and accurate. Technology can often make this task less exhausting, as seen with on-the-fly projection, which usually manages to match two different features in the same place even when one lacks a defined projection. This exercise, however, was an example where that tactic did not work. Thus, when obtaining data, it is crucial to have all the information needed to work with it. Unfortunately, metadata rarely contains every necessary detail, so it is essential to keep track of data sources and always record the relevant information.

A more flexible side was also experienced in using cartography to manage the map-making. In this area of geography there is no such rigor: there are no strict rights and wrongs, but a number of possibilities that can meet the goals of presenting the data. That does not mean anything goes; it is always fundamental to focus on the purpose of the map and test different ways of handling different priorities. This can be challenging precisely because there is no single right answer, and it is commonly a trade-off.

In summary, the results of the exercise were satisfactory, but above all, dealing with these challenges was a great experience for improving and reinforcing our geographic knowledge.

Sunday, February 24, 2013

Distance-Azimuth Survey


Introduction

Nowadays, although technology is the main means of spatial data collection, it is not always available to everyone. Sometimes you may have the necessary equipment, but depending on the regulations of the place where the collection is done, authorities can confiscate it. On other occasions, the equipment cannot provide enough accuracy because of natural conditions. Technology also tends to be expensive, so not everyone has access to it.

With that in mind, alternative data-collection equipment was presented to the students. The compass, the distance finder, and the laser device were used in the old days, when technology was not as available as it is today. All the students were able to get hands-on with the equipment and collect simple information outside, as a way to understand how the technique works. Afterwards, the information was transferred to ArcGIS, where it was possible to analyze the collected points, get an idea of the accuracy involved in the procedure, and consider how it could be improved in a second collection.

Each group then chose one piece of equipment, a site, and a theme to map. This report covers the collection of fifty trees in Owen Park and their heights, using the laser device. The goals of this project are to learn how these alternative methods work and to understand and compare the pros and cons of each.

Methodology

The equipment introduced provides two main methods for acquiring data: one uses the compass to get the azimuth and a distance finder for the length between observer and object; the other uses the laser device, which gives both with a single instrument. Although technicalities differentiate the methods, both rely on the concept of azimuth and distance: as long as you have an accurate coordinate for your own position, you can infer other positions from direction and length.

The direction is based on the azimuth, the clockwise angle between north and your direction. The Earth's magnetic field allows the compass to show where magnetic north is, and from that the azimuth can be inferred. It is important to understand that magnetic north differs from true north; the relation between them is called magnetic declination, and it depends on the year and location, so adjustments may have to be made. Since the declination in Eau Claire is close to zero, there was no big concern in this matter, but it is important to be aware of it.
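The declination adjustment itself is a one-line correction. A minimal sketch, using the common convention that east declination is positive (the example values are hypothetical, since Eau Claire's declination is near zero):

```python
def true_azimuth(magnetic_az_deg, declination_deg):
    """Correct a magnetic azimuth for local declination.
    East declination positive, west negative (common convention)."""
    return (magnetic_az_deg + declination_deg) % 360

# Hypothetical reading of 120 degrees with a 0.5-degree east declination:
corrected = true_azimuth(120.0, 0.5)
```

The modulo keeps the result in the 0-360 range when the correction crosses north.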

In the first method, the compass is corrected if necessary and pointed toward the object being mapped; the number observed is the azimuth, which is noted in a table. Standing in the same position, one person holds part I of the distance finder while the other walks to the object with the other part. The result is the distance between the two parts, noted in the same table as the azimuth but in a different field. Names and other attributes of each feature are collected at this time, using the table, to maintain organization.

The data collection in this project used the second method, which resembles the first but is more convenient. The same data is collected with the same table organization, but with a different device. An internal compass provides the azimuth when the equipment is pointed at the target. For the distance, the laser emits infrared pulses that hit the target and return; from the travel time and the speed of the pulse, the distance is calculated. The laser device also offers a vertical-distance option, which was used to acquire the tree heights: a tilt sensor measures the angle of the line to the target, and the calculations yield the height.
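The time-of-flight calculation behind the distance reading can be sketched in one line: the pulse travels to the target and back, so the one-way distance is half the round-trip path. The timing value below is hypothetical, chosen only to illustrate the scale involved:

```python
def laser_distance(round_trip_s, c=299_792_458.0):
    """One-way distance from a laser pulse's round-trip time at the
    speed of light: the pulse goes out and back, so divide by two."""
    return c * round_trip_s / 2

# A hypothetical round trip of 167 nanoseconds, roughly a 25 m target:
d = laser_distance(167e-9)
```

The nanosecond scale of these times is why the timing electronics, not the optics, limit the precision of such devices.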

It was thus planned to collect fifty trees within Owen Park between 2:30 and 4:00 PM on February 20th. The decision of when to collect took into account not only the students' availability but also the outside conditions, such as temperature and precipitation, to minimize their impact on the collection. The temperature was approximately -10°C, reasonable for collection, and there was no precipitation at the time.

Four street corners were used as origin points (Figure 1) and marked on an aerial image taken to the field. Tasks were divided so that for 25 points one person used the device while the other took notes in a table, and for the other half the tasks were switched. The technical modes used on the device were azimuth (AZ), slope distance (SD), and vertical distance (VD), with a precision of one decimal place. In the AZ and SD modes, the device was aimed straight at the tree trunk; in the VD mode, the crown of the tree was targeted, aiming for the thickest and highest branch.

Figure 1 - Andrew using street corner as a reference.


After the collection, the data had to be normalized to meet the standards of the ArcGIS tools used: Bearing Distance To Line and Feature Vertices To Points. A table with the essential fields was created in Microsoft Excel (Figure 2). The origin-point data was collected using Google Earth. The ArcGIS help does not specify which coordinate system and units the origin points should be in, so the Geographic Coordinate System in degrees was tried; it worked fine in the first test, so the same standard was kept from then on.

Figure 2 - Table creation with appropriate fields.


The “bearing” field holds the azimuth in degrees. The tool works by creating a line starting from the origin point in the direction of the azimuth, with the length given by the distance field, as presented in Figure 3, where 0 degrees can be considered true north.

Figure 3 - Geometric method to locate points with azimuth (bearing) and distance.
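The geometry in Figure 3 reduces to basic trigonometry. This sketch works in a planar coordinate system with consistent units (the project's actual inputs were degrees plus a distance field, which the ArcGIS tool handles internally); the sighting values are hypothetical:

```python
import math

def endpoint(x0, y0, bearing_deg, distance):
    """Endpoint of a line from (x0, y0) along a bearing measured
    clockwise from north, with the given length."""
    rad = math.radians(bearing_deg)
    # sin for the east-west component, cos for the north-south one,
    # because bearings are measured from north rather than the x-axis.
    return x0 + distance * math.sin(rad), y0 + distance * math.cos(rad)

# A hypothetical tree sighted at azimuth 90 degrees, 25 m from the origin:
x, y = endpoint(0.0, 0.0, 90.0, 25.0)
```

Each tree point is one such endpoint; the origin points are shared across many sightings.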


After running this tool, Feature Vertices To Points simply creates points at the vertices of any input feature. The result, however, includes repeated origin points, since they were vertices of the previous feature; these should be deleted, because the feature class is supposed to contain only tree points.

Discussion

The collection portion of this exercise can be considered successful, but some problems deserve discussion. Firstly, targeting an exact point requires keeping the device totally still, and even a slight shake compromised it (Figure 4). The problem worsened when the trees were far away or their trunks were thin. A simple fix would be some sort of portable tripod to guarantee the stability of the equipment; unfortunately, this problem was only noticed once already in the field, so that extra equipment was not available.

Figure 4 - Beatriz trying to use the device without slight shakes.


A similar problem was worse when dealing with the vertical distance. At this time of year the tree crowns are completely leafless, and the branches were extremely thin and looked alike on all the trees, so it was hard to tell, while targeting, whether the observed branches belonged to the target tree or to one in front of it. The errors only became noticeable afterwards, through illogical results such as three meters for an extremely tall tree. The solution found was to move closer to the targeted trees, since the height result does not depend on the origin point.

Next, the collected information had to be transferred into ArcGIS. Here it is worth emphasizing the importance of checking and setting the default geodatabase in the document properties: ArcGIS sometimes runs into problems when saving to a location other than the default geodatabase, and if you are not aware of which geodatabase is set as the default, you may encounter this problem.

ArcGIS Help was essential for finding the appropriate tools and understanding their details, especially before building the Excel table, so that coherence between the tools and the information provided in the table was guaranteed.

To obtain the origin point coordinates, it is important to think about the units and precision being used. The usual standard is one or two decimal places of precision. However, this project deals with a large-scale map, so displacements are easier to notice. Also, the coordinates are input in degrees, which are not easily understandable as a distance measurement. It is therefore necessary to calculate what 0.1° means at the site to be aware of how the precision can affect the results. That involves the circumference of the Earth and the latitude of the site. At Owen Park, 0.1° represents approximately 8 km; if a precision of only one decimal place were used, the trees could be placed on the other side of the city. That is why a precision of six decimal places was used in this project, which gave reasonable results (Figure 5).
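The ~8 km figure can be checked with a quick sketch, assuming a spherical Earth of roughly 40,075 km circumference and a latitude of about 44.8° N for Eau Claire (both approximations of mine):

```python
import math

EARTH_CIRCUMFERENCE_KM = 40075.0  # equatorial circumference, spherical approximation

def degree_length_km(latitude_deg):
    """Length of 1 degree of longitude at a given latitude (spherical Earth)."""
    return (EARTH_CIRCUMFERENCE_KM / 360.0) * math.cos(math.radians(latitude_deg))

# Eau Claire sits near 44.8 degrees N
print(round(0.1 * degree_length_km(44.8), 1), "km")  # 7.9 km per 0.1 degree of longitude
```

By the same logic, the sixth decimal place corresponds to well under a meter, which is why that precision gave reasonable results.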

Figure 5 - Trees Locations using ArcGIS tools

However, by analyzing the results, it is possible to notice that they are not totally accurate: trees 1, 2, 3, 6, 8, 26, 43, 44 and 46 were placed in the middle of the street, which does not represent the real-world situation. Some explanations for the inaccuracy were then considered.
At first, since the direction of the lines seemed incorrect, the precision of the azimuth was considered as an issue. It is true that a lack of precision has more effect at greater distances, so it might have caused problems with the trees collected from farther away. However, after calculating the margin of error (Figure 6) for every tree based on the 0.1° precision of the azimuth, the maximum error would be only about 10 cm, so another reason for the inaccuracy had to be found.

Figure 6 - Calculation of the Margin of Error (using trigonometry formulas)
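The trigonometric check in Figure 6 can be sketched as follows; the 50 m distance below is a hypothetical farthest shot, not an actual measurement from the survey:

```python
import math

def lateral_error(distance_m, azimuth_error_deg):
    """Worst-case sideways displacement of a plotted point caused by an
    azimuth error, using the relation e = d * tan(delta)."""
    return distance_m * math.tan(math.radians(azimuth_error_deg))

# Hypothetical tree 50 m away, with the 0.1-degree azimuth resolution:
print(round(lateral_error(50.0, 0.1) * 100, 1), "cm")  # 8.7 cm
```

Even at 50 m, a 0.1° azimuth error shifts the point by under 10 cm, which is why the azimuth precision was ruled out as the cause of trees landing in the street.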

The data related to the Area of Interest that supports our analysis should be collected at a scale as large as, or similar to, the site's. However, the basemap used to compare the results was probably produced at a smaller scale, covering a much larger area. The basemap was also simply imported from the ArcMap standard database, so there is no information about its quality or scale. It is not possible to determine whether orthorectification (the process by which an aerial image is geometrically corrected to have a uniform scale) was applied to the photograph. This suggests the aerial image is distorted in some way, especially since, if the image were slightly rotated clockwise, all the trees placed in the streets would fall in their real locations.

Another factor that may interfere with quality and accuracy is the presence of particles between the collector and the target, misguiding the equipment into hitting a different target. The laser is based on infrared light, so its wavelength lies somewhere between 0.7 µm and 1 mm; objects larger than that can therefore affect the result. This explanation was considered because it was snowing during class and some odd results were found. A snowflake is usually larger than 1 mm, so the odd results might be caused by that. Considering that the air may contain particles in that size range that cannot be easily seen, it is not impossible that the collection was compromised by them as well.

Lastly, it is interesting to examine the results for the heights of the trees (Figure 7). Since there was no supporting data, such as a basemap, for the heights, it was not possible to test for inaccuracies; however, considering the range of sizes and their distribution, it can be said that the laser device worked reasonably well to acquire that information.

Figure 7 - Heights of the trees in Owen Park


Conclusion

The use of technologies such as GPS for data collection is doubtless more convenient, practical and fast. Accuracy can be tested more easily, by checking the PDOP of each collection. However, there are situations where the alternative methods can be even more precise than GPS. For instance, in a dense forest on a cloudy day, it would be extremely hard to obtain a reasonable PDOP for the collection. The methods described in this project, however, are not affected by these variables, since there is no signal coming from a satellite. The only things that could compromise them would be a magnetic change, which is negligible, or the problems discussed before, related to human limitations and the laser wavelength. On the other hand, collection with this method is time-consuming and requires more knowledge from the user.

Therefore, the method chosen for a collection should weigh all these variables in deciding what best fits its purposes. The alternative methods would rarely be used; it is impossible to deny the predominance of GPS. However, it is immensely important to be aware of these methods and know how to use them in case of the unforeseen. Technology should not be neglected, of course, but we should not rely totally on it.

Sunday, February 17, 2013

Balloon Mapping I - Preparation



Introduction

Satellite images and aerial photography are used for mapping purposes mainly by government institutions and people in power. Thus, the general public usually depends on the information they release, which can be distorted by different interests, since obtaining satellite and aerial photography is expensive. Also, the images are usually out of date and do not represent the current status of the land. Alternative sources such as Google Earth are valuable; however, they are not systematically collected or released, which does not meet the scientific purposes of a project.

The Balloon Mapping Method consists of using a camera in continuous mode, elevated by a helium balloon, to acquire aerial images of the surface. It is a low-cost alternative to obtain updated and specific data for an area with a reasonable level of detail. In this project, the class is working together to build and organize the balloon mapping and the later High Altitude Balloon Launch.

Methodology
Figure 1 - Payload Weight


Different kinds of materials were gathered, and the students had to find out, using their creativity and some available tutorials, how to create equipment able to fly under the appropriate conditions. Different groups worked separately on the construction of a mapping rig and of a High Altitude Balloon Launch (HABL) rig, testing the parachutes, weighing the payload of both the HABL and mapping rigs (Figure 1), designing and implementing continuous shooting on the cameras, implementing and testing the tracking device, and planning how to fill the balloon and secure it to the rig.

Since the students divided themselves into different groups whose work will be connected at the end, this report focuses mainly on the construction of the mapping rig and on the design of the continuous-shot mechanism for the cameras. The specific goal for this section was to build a structure that properly protects the camera in case of a fall, without compromising the picture frame, while guaranteeing continuous mode with an automatic trigger.

Initially, the continuous mode had to be found and evaluated on the available cameras. The type of picture taken and the shooting rate had to be considered to find the most appropriate shooting mode. Plastic bottles were used as the main protection for the camera; however, since there were cameras and bottles of different sizes, it was necessary to find the proper match between them.

With that done, duct tape, string and rubber bands were used to hold the camera inside the bottle so that it would not fall, keeping in mind that the bottle must not appear in the picture frame, which is why tests were made to guarantee that. Also, keeping continuous mode active depends on a trigger that holds the button down for the whole flight. Thus, small eraser pieces and a knot in the rubber band were tested to find which would work better to keep the mode active.

Discussion

First, continuous mode was tested on three cameras. Since they have different software templates, this setting was displayed differently on each camera; thus, it took some time to set all of them to the same mode.

After a size match with a two-liter soda bottle, only one camera was the focus of further adaptations. The camera fit tightly inside the bottle, which is why it was tested inside a bigger bottle as well; however, in the latter, since there is more space available, the camera moves more, which can cause blur in the pictures.
Figure 2 - Second, bigger rig; it is possible to notice the space inside it, where the camera can move around.
Hence, there are two options, each with pros and cons: the first rig provides stability for the camera, minimizing motion blur, but protection is reduced because the camera directly contacts the plastic; on the other hand, the second rig (Figure 2) provides full protection, but the camera moves more, increasing blur in the pictures. More tests related to safety and picture quality must be done to judge which one better fits the project's purposes.

Something similar happened with the design of a trigger for the shooting button. At first, a small piece of eraser was used together with the rubber band to keep the button pressed. However, under the strong pressure of the rubber band, it kept falling off repeatedly. It was then decided to tie a knot in the rubber band and test whether that was enough to keep the button pressed, which resulted in a more convenient design. The only challenge in both cases was dealing with the camera's zoom control, located together with the shooting button (Figure 3). Every time the rubber band is placed on the button, it activates the zoom, which is not intended for the project. Hence, caution is necessary when activating the camera for shooting, so that the zoom is not activated as well.
Figure 3 - On the left, the eraser trigger causes the rubber band to bulge at the button location, while on the right, the knot pushes the button without excessive pressure. In both cases, it is possible to notice the zoom control attached to the button.



Conclusion

For the purposes of testing, planning and defining which materials and designs are most efficient and appropriate for the mapping and launch, the exercise was successful, although more adjustments and connections between the different sections are still necessary. It was a great exercise in organizing a big group around a single goal. The division of tasks worked effectively, yet was not totally separate: since everyone was working together at the same time, it was possible to witness the others' plans. Also, the flexibility and freedom to create the rig design exercised creativity through the trial and error of different ideas.

Though the exercise consisted more of planning the future mapping than of actually collecting data, it is important to realize the meaning of this project: the possibility of high-resolution, significant images being acquired by the general population, instead of being concentrated only in positions of power. In this case, only the visible portion of the electromagnetic spectrum is being used, but the near-infrared could be used as well. Digital camera sensors are able to detect reflectance in this portion of the spectrum, but manufacturers intentionally insert an NIR filter. Its removal involves more complicated and technical, yet possible, procedures with the camera, which is why it was not initially included in this project. However, it is an option that would give even more information, for instance, to detect vegetation health. Therefore, this project shows how technical geographic knowledge allied with creativity can enable obtaining valuable and extensive data with simple equipment.

Sunday, February 10, 2013

Sandbox Elevation Model Continued


 Introduction

In this project, the preliminary data collection was analyzed using 3D tools, working with different types of interpolation and showing where improvements could be made. With that, it was possible to evaluate how a re-survey would refine our data. The second collection was then made, and data manipulation with ArcMap, ArcToolbox and ArcScene resulted in a consistent representation of the landscape designed inside the planter box.

Methods

After the data collection, it is necessary to format the table using Microsoft Excel so that each point has x, y and z columns. With this format ready, a feature class using these coordinates can be created in ArcCatalog or ArcMap.

With the point feature class in ArcMap, it is time to create a raster based on the Z value of the points. Five interpolation methods were used with ArcToolbox to create rasters: IDW, Kriging, TIN, Natural Neighbor and Spline. These raster files are necessary to provide a 3D view in ArcScene.
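As an aside, the intuition behind one of these methods, IDW, can be sketched in plain Python. This is not the ArcToolbox implementation (which adds search radii and other options), and the sample points below are hypothetical:

```python
def idw(points, x, y, power=2):
    """Estimate z at (x, y) from known (px, py, pz) samples by
    inverse-distance weighting: nearer samples count more."""
    num = den = 0.0
    for px, py, pz in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return pz  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * pz
        den += w
    return num / den

# Four hypothetical survey points (x, y in cm from the box origin, z negative below the rim):
samples = [(0, 0, -5.0), (8, 0, -7.0), (0, 8, -5.0), (8, 8, -7.0)]
print(idw(samples, 4, 4))  # midpoint: equidistant from all four samples, so -6.0
```

Each interpolator fills the unknown cells differently, which is exactly what the 3D comparison in ArcScene makes visible.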

The 3D view allows analyzing the accuracy of the results in comparison with the real model in the planter box. With that, it is possible to decide what might be changed in terms of point density, and which interpolation method best represents the features designed in the box.

With this in mind, a second collection was necessary to refine the data and obtain better results. For this survey, strings were used to create the grid where the points would be taken. Nails were hammered in (Figure 1) to fix the strings, avoiding the earlier problem where the negative temperatures compromised the effectiveness of the tape being used. Also because of the temperatures, the day chosen to collect the data was warmer than last time, and the strings were cut inside the building to minimize the time spent outside.

Then, the same computer process was applied: formatting the table, creating a feature class, transforming it into a raster and visualizing it in three dimensions with ArcScene. The final result is a digital model that accurately represents the landscape designed in the planter box.


Figure 1 - Use of nails and hammer to fix the strings.
Results

As seen in the last report, the note-taking followed the box shape, instead of having three columns x, y and z for each point. Hence, it was necessary to create a table in that format (Figure 2) to allow the creation of a feature class from the data.

Figure 2 – Fixed format of coordinates.
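That reformatting step can also be scripted; here is a minimal sketch, assuming a hypothetical 2x3 grid of depth readings and a uniform spacing (the real table was larger):

```python
def grid_to_xyz(grid, spacing):
    """Flatten a row-by-row grid of elevation readings into
    (x, y, z) triples, with coordinates measured from the origin."""
    rows = []
    for j, row in enumerate(grid):      # j indexes the y direction
        for i, z in enumerate(row):     # i indexes the x direction
            rows.append((i * spacing, j * spacing, z))
    return rows

# Hypothetical readings in cm below the box rim (hence negative), 8 cm spacing:
grid = [[-5.0, -6.0, -7.0],
        [-5.5, -6.5, -7.5]]
for point in grid_to_xyz(grid, 8):
    print(point)
```

The resulting x, y, z rows are exactly the format the feature class creation expects.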

Then, the feature class was created and used to generate five rasters using the different interpolation methods. Using ArcScene, the 3D visualization (Figure 3) made visible which methods better represented the surface and which features in the landscape needed more detail.

Figure 3 – Results in three dimensions.

Unfortunately, none of the methods was able to represent one of the features designed in the box: the river flowing in the upper-right corner of Figure 4 was supposed to be represented by minor elevations in the lower-left corner of the 3D images in Figure 3. Some points in this area do show the pattern of low elevations; however, they lack the continuity a river should have.

Figure 4 - Landscape in the planter box with the river.
Therefore, the density of points collected was the first change for the second survey. In the first survey, points were collected starting from the origin at 6-inch intervals to the east and north. The new collection was done starting from the origin at 8 cm intervals (Figure 5). The International System of Units was now applied, since it better fits the scientific purposes of the project (Figure 6).


Figure 5 – Comparison between data collection methodologies.

Figure 6 - Use of SI to measure the intervals.

Then, points were collected at the strings' intersections. (Figure 7)
Figure 7 - Elevation data collection.
With the data collected in the notebook, the table was formatted in Excel (Figure 8) and used to create the feature class. Using ArcToolbox, the different interpolation methods created five raster files that were visualized in 3D in ArcScene. (Figure 9)
Figure 8 - Notebook data formatted to an excel table.

Figure 9 - Visualization in 3D of different interpolation methods.

Conclusion

The analysis of the 3D model results indicates that the spline method best represented the surface designed in the box. The reason is that it generalizes more in the unknown areas, which smoothly represents the landscape. In the other methods, it is easy to notice geometric shapes repeated over the surface, which does not represent the real world. Besides the spline method, the natural neighbor method also represented the landscape reasonably well.

Also, the increase in points collected clearly shows the improvement in the representation of the surface. The river is now well delineated and the heterogeneity of the surface is more apparent.

The project as a whole showed how a preliminary collection can considerably improve the final collection and result.

Sunday, February 3, 2013

Sandbox Elevation Model

Introduction

The first assignment of the class consisted of constructing a fictitious landscape inside the Phillips Courtyard Garden planter boxes to model a real surface. It also included the later survey of coordinate points, including elevation (x, y, z), to transfer the elevation model to ArcGIS.

However, instead of having a step-by-step tutorial with all the procedures to complete the tasks, we were supposed to use our creativity, so we had freedom to decide how we would do this.

The exercise worked as a miniature of a real field trip, where you need to plan how you will obtain your data, which tools you will use, and how the outside conditions may interfere with your collection, and then figure out how you will deal with that.

The evaluation and experience gained in this exercise will make it possible to learn how to improve data collection in a real-world situation.

Data Collection
Figure 1 - Data Collection
Methods

First, it was necessary to plan how the data collection would be done. In a preliminary meeting, the group decided some specifications of the coordinates, such as where the origin would be located and which level of detail we would use, or in other words, how big the collection interval would be. It was then decided to create imaginary 6x6-inch boxes with the strings and collect the central point of each box. This model was drawn in a notebook where the elevation points would be written down.

For that, a big ruler, a measuring tape, string and a knife were necessary. We went outside on the afternoon of January 31 and started creating the landscape with the available snow. We created hills, mountains, rivers and plains. With the measuring tape and the knife, we marked the 6-inch interval points from the origin (Figure 1).

We tried to tie the strings at these marks; however, the tape we had for that didn't work as expected, so the strings would not stay fixed to the wood.


Then, we took advantage of the big ruler, which could serve as the string, and positioned it between each pair of points on the planter box, so that with the measuring tape we could collect points every 6 inches inside that imaginary box.



After the collection, the data were typed into Excel, following the idea of the boxes. With the data, it was possible to calculate the x and y coordinates. The numbers obtained for elevation were multiplied by -1 to reflect the idea that the top of the planter box was mean sea level.

In total, 98 points were collected, as can be seen in Figure 2. The next step is to input these coordinates into a single table and import them into ArcGIS, which will be done in the next exercise.

Discussion

This exercise was really productive in terms of learning. Since we had to figure out the procedures by ourselves, we had the opportunity to try different methods and notice which ones worked better. Self-analysis is crucial in this step, because once the work was done, we had a lot of lessons to draw from it.


The first big lesson was to be aware not only of the outside conditions, but also of the period of time you are going to spend in those conditions. At the time we were collecting the data, it was -15°C, which is an acceptable temperature for walking from one building to another. However, the collection lasted much longer than that, compromising our work. Thus, in the next projects, a good strategy is to spend some time outside first to test whether these conditions might compromise your work, and then, depending on the result, to adapt, in this case with more layers and gloves, to avoid the loss of productivity.

Another important lesson is to plan ahead and leave everything prepared before going to the field. This matters most when the outside conditions are challenging, but it holds on any occasion. We needed to cut the string pieces to the right size for the box, but there were a lot of strings and we did that outside. Considering it was extremely cold, it would have been better if we had taken the right measurements and cut all the strings inside. So, it is really important to plan carefully and think about details like that; sometimes they are small and won't compromise the project, but in more challenging situations they can be a real problem.

However, we saw the obstacles we faced as challenges that we were happy to overcome and learn from. It was a good experience to work as a connected group, despite the division of tasks. Also, the freedom in how to collect the data made us more flexible. Hence, when an unforeseen problem happened, such as the tape not fixing the strings, we could quickly improvise a different approach that worked just as well. This allowed us to exercise our creativity and learn to deal better with the unexpected things that happen in the field.

Conclusion

The exercise should be considered successful in its objectives. We were able to plan ahead and better understand which techniques we could use to collect data in the field. During execution, the challenges taught us different ways to deal with them and still get the necessary results. Lastly, by registering all the work in a report, it was possible to evaluate the experience as a whole and to learn better ways to collect data in future fieldwork.