Python: Geospatial Environment Setup (Part 2)
Hey again folks! I am here with the second part of the Python environment setup for a geospatial workspace. I published the first part of this post two weeks ago, so if you've not read it yet, here's the checklist to catch you up to speed:
Install Python ☑
Install Miniconda ☑
Install the basic Python libraries ☑
Create a new environment for your workspace
Install geospatial Python libraries
Since we have manually set up our base environment quite thoroughly with all the basic libraries we need, we can make our work easier by simply cloning the base environment and installing the additional libraries essential for geospatial analysis. This new environment will be called geopy. Feel free to use a name you identify with most.
Why don't we just create a brand-new environment? Well, that would mean installing the Python libraries again from scratch. Although it is no trouble to do so, we want to avoid installing so many libraries all at once. As I mentioned in Part 1, there is always a risk that unresolved dependencies in one library will affect the installation of the other libraries you intend to install in the same go. Since we already have a stable and usable base environment, we can use it as a sort of pre-made skeleton to build our geospatial workspace on.
1️⃣ At the Anaconda Command Prompt, type the following:
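The original snippet isn't reproduced here, but cloning the base environment would look like this (swap `geopy` for whatever name you chose):

```shell
conda create --name geopy --clone base
```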
2️⃣ Press Enter and the environment will be cloned for you. Once it is done, you can use the following command to check that your new environment is available 👇🏻
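The check is a one-liner; `conda env list` prints every environment conda knows about, with an asterisk next to the active one:

```shell
conda env list
```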
You should be able to see your geopy environment listed along with the base environment.
Here we will proceed with the installation of a few geospatial Python libraries that are essential to reading and exploring the vectors and rasters.
🔺 fiona: This library is the core that some of the more recent libraries depend on. It is a simple and straightforward library that reads and writes spatial data through familiar Python IO idioms instead of exposing the infamous GDAL OGR classes directly.
🔺 shapely: provides the capability to manipulate and edit spatial vector data on the planar geometric plane. It is one of the core libraries that recent geospatial Python libraries rely on for reading and editing vector data.
🔺 pyproj: the Python interface to the cartographic projections and coordinate transformation library. Another key library, it is what allows the 'location' characteristics of your spatial data to be read.
🔺 rasterio: reads and writes raster formats and provides a Python API based on NumPy N-dimensional arrays and GeoJSON.
🔺 geopandas: extends the pandas library to allow spatial operations on geometric spatial data, e.g. shapefiles.
💀 As you might have noticed, we won't be installing the gdal library directly. That's mainly because its installation seems to be accompanied by misery at every turn, and it involves workarounds that are pretty inconsistent from one machine to another. Does that mean we won't be using it for our Pythonic geospatial analysis? Heck no. Instead, we will take advantage of the automatic dependency installation that comes with the libraries above. The rasterio library depends on gdal, so by installing it we integrate the gdal library indirectly into our geospatial environment. I found this method to be the most fool-proof. Let's proceed to the installation of these libraries.
1️⃣ At the Anaconda Command Prompt, if you are starting fresh, ensure that your geopy environment is activated. If it is not, use the following command to activate geopy.
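With conda, that's simply:

```shell
conda activate geopy
```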
Once activated, we can install the libraries mentioned one after another. Alternatively, you have the option of installing them all in one go with a single command 👇🏻
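The one-go command would look something like this (the conda-forge channel is my assumption here; the default channel may work for you too):

```shell
conda install -c conda-forge fiona shapely pyproj rasterio
```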
💀 geopandas is not included in this line-up NOT because we do not need it. It's another temperamental library that I prefer to isolate and install individually. If gdal is a rabid dog...then geopandas is a feral cat. You never know how-when-why it doesn't like you, and it can drag a single 10-minute installation out to hours.
3️⃣ Once you're done with installing the first line-up above, proceed with our feral cat below 👇🏻
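Again assuming the conda-forge channel, the feral cat is tamed with:

```shell
conda install -c conda-forge geopandas
```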
4️⃣ Use the conda list command again to check if all the libraries have been installed successfully.
🎉 Et voilà! Tahniah! You did it! 🎉
🎯 The Jupyter Notebook
It should be the end of the road for the helluva task of creating the geospatial environment. But you're going to ask how to start using it anyway. To access these libraries and start analyzing, we can use the simple and straightforward Jupyter Notebook. There are many IDE choices out there, but for data analysis Jupyter Notebook has sufficed for me so far, and if you are not familiar with Markdown, this tool will ease you into it slowly.
Jupyter Notebook can be installed in your geopy environment as follows:
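One way to do it (the conda package that provides Jupyter Notebook is `notebook`):

```shell
conda install -c conda-forge notebook
```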
And proceed to use it by prompting it open via the command prompt
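From the activated geopy environment, this opens the notebook in your browser:

```shell
jupyter notebook
```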
It ain't that bad, right? If you're still having problems with the steps, do check out the real-time video I created to demonstrate the installation. And feel free to share with us what sort of problems you have encountered and the workaround or solutions you implemented! It's almost never a straight line with this, trust me. As mentioned in the previous post, check out the quick demo below 👇🏻
See you guys again for another session on geospatial Python soon!
With this, I am commencing my submission for the #30DayMapChallenge for 2023 🗺
The categories outlined are similar to last year's, but I am never going to hate the repetition. How can I? It's the basics of making maps, and there's so much to learn from each single-word theme.
Any aspiring map-makers out there? Let's share our maps for this wonderful month of November under the #30DayMapChallenge 2023!
Ok.
I wanna know why I have never heard of this online tool before. Like, what the hell is wrong with social media? Is something wrong with Twitter or Instagram that they never caught on to mapshaper? Or was it just me and my hazardous ignorance, yet again?
Have you tried this free nifty online tool that literally simplifies crazy complicated shapefile polygons like it's no one's business?!
It started with some last-minute inspiration on how to collate data from 3 different regions, each developed from a different remote sensing technique. The common goal here is to turn all of them into a vector file, namely a shapefile, and work on the attributes to ease the merging of the different shapefile layers.
Once merged, this shapefile is to be published as a hosted feature layer on the ArcGIS Online platform and incorporated into a webmap that serves as reference data for configuring/designing a dashboard. What is a dashboard? It's basically an app template in ArcGIS Online that summarizes all the important information in your spatial data. It's a fun app to create, with no coding skills required. Check out the gallery here for reference:
Operations Dashboard for ArcGIS Gallery
There are two common ways to publish a hosted feature layer on the ArcGIS Online platform.
Method 1: Zip up the shapefile and upload it as your content. This will trigger a prompt asking if you would like to publish it as a hosted feature layer. You click 'Yes', give it a name, and et voilà! You have successfully published a hosted feature layer.
Method 2: From ArcGIS Desktop or ArcGIS Pro, publish it as a feature service (as ArcMap calls it) or a web layer (as its sister ArcGIS Pro calls it). Fill in the details, enable the functions you need, then hit 'Publish', and it will appear on the platform provided there are no errors or conflicting issues.
So, what was the deal with me and mapshaper?
🛑 A fair warning here and please read these bullet points very carefully:
I need you to remember...I accept no responsibility for what happens to your data should you misinterpret the steps I share.
Please always 👏🏻 BACK 👏🏻 UP 👏🏻 YOUR 👏🏻 DATA. Don't even attempt any tool or procedure I am sharing without doing so. Please. Cause I am an analyst too, and hearing that someone forgot to save their data or create a backup is enough to make me die a little inside.
For this tool, please export out the attribute table of your shapefile because this tool will CHANGE YOUR SHAPEFILE ATTRIBUTES.
When I was publishing the vector I had cleaned and feature-engineered via ArcGIS Pro...it took so long that I was literally dying inside. I'm not talking about 20 minutes or an hour. It took more than 12 hours, and it never conjured the 'Successfully published' notification I expected from it.
So at around 5.30 am, I randomly typed 'simplify shapefile online free'. Lo and behold, there was mapshaper.
All I did was zip up my polygon and drag it onto the homepage, which brings you to the option of choosing the actions to be executed while the data is imported into mapshaper:
detect line intersections
snap vertices
The first option helps you detect the intersections of lines within your vector/shapefile, which can help identify topological errors.
The option to snap vertices will snap together points with identical or nearly identical coordinates. It does not work with TopoJSON formats, though.
There is something interesting about these options too; you can enter other customized options provided by the tool through its command line interface! But hold your horses, peeps. I did not explore that, because here we want to fix an issue and we'll focus on that first. I checked both options and imported the data.
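(As an aside, the same engine can be run locally through the mapshaper npm package; the flags below are my reading of the mapshaper docs, so do verify them against your installed version:)

```shell
# npm install -g mapshaper      (the web tool at mapshaper.org needs no install)
mapshaper forest.shp -simplify weighted 5% keep-shapes -o simplified.shp
```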
This brings you to a page where you can start configuring the options and method for simplifying your vector.
To simplify your shapefile, you can check two options that prevent the shape of the polygons from being compromised: 'prevent shape removal', and 'use planar geometry', which utilizes planar Cartesian geometry instead of the usual geodesic longitude and latitude. The implication of the second option is not obvious to me yet; since all I wanted was to get the data simplified for easy upload and clean topology, I chose both options to maintain the shape and visibility of all my features despite the highest degree of simplification.
Similar to the simplification methods available in mainstream software, I can see familiar names:
Douglas-Peucker
Visvalingam / effective area
Visvalingam / weighted area
First and foremost, I did not have the slightest idea what these were. Like, for real. I used to just go for the default first to understand what sort of output it would bring me, and here the default, Visvalingam / weighted area, seemed like the best option. So what are these simplification methodologies? They are just algorithms used to help simplify your vectors:
🎯 The Douglas-Peucker algorithm decimates a curve composed of line segments into a similar curve with fewer points (Ramer-Douglas-Peucker algorithm, Wikipedia; 2021).
🎯 The Visvalingam algorithm is a line simplification operator that works by eliminating the less significant points of a line based on the 'effective area' concept: the area of the triangle formed by each point with its two immediate neighboring points (Visvalingam Algorithm | aplitop).
🎯 The Visvalingam algorithm with weighted area is a subsequent development of the Visvalingam algorithm in which an alternative, weighted metric is used to take the shape into account (Visvalingam & Whelan, 2016).
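To make the first of these less abstract, here is a minimal pure-Python sketch of Douglas-Peucker. This is only an illustration of the idea, not what mapshaper actually runs internally:

```python
from math import hypot

def perpendicular_distance(pt, start, end):
    """Distance from pt to the line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:                       # degenerate segment
        return hypot(x - x1, y - y1)
    # parallelogram area divided by base length gives the height
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Recursively drop points closer than `tolerance` to the chord."""
    if len(points) < 3:
        return list(points)
    # find the point farthest from the chord joining the endpoints
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    index, dmax = max(enumerate(dists, start=1), key=lambda t: t[1])
    if dmax <= tolerance:                   # everything hugs the chord
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right                # don't duplicate the split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))   # [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

Only the "corner" points that actually bend the line survive; the near-collinear ones are decimated, which is exactly the trade-off the slider exposes.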
For reasons I can't even explain, I configured my methodology to utilize the third option, and now that I have had the time to google it, thank God I did.
Then, see and play with the magic at the 'Settings' slider, where you can adjust and preview the simplification applied to the vector! I adjusted it to 5%. The shape retained beautifully. And please bear in mind, this vector was converted from a raster, so what I really wanted was the simplified version of the cleaned data, and to have it uploaded.
Now that you've simplified it, export it as a zipped folder of shapefile components; after extracting it, you can use it like any other shapefile.
Remember when I said you have got to export your attribute table before you use this tool? Yea...that's the thing. The attribute table will shock you, cause it'll be empty. Literally. Only the OBJECTID is left. Now, with the attribute table you've backed up, use the 'Join Table' tool in ArcGIS Pro or ArcMap to join the attributes back in without any issues.
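Conceptually, the rejoin is just a key-based lookup on OBJECTID. A tiny Python sketch of the idea (field names and values here are hypothetical; in practice you'd use the 'Join Table' tool):

```python
# Attribute rows exported BEFORE running the data through mapshaper.
backup_rows = [
    {"OBJECTID": 1, "REGION": "north", "AREA_HA": 120.5},
    {"OBJECTID": 2, "REGION": "south", "AREA_HA": 98.0},
]
# What comes back from the tool: geometry records with attributes stripped.
simplified = [{"OBJECTID": 1}, {"OBJECTID": 2}]

# Build a lookup on the surviving key, then merge each feature with its row.
lookup = {row["OBJECTID"]: row for row in backup_rows}
rejoined = [{**feature, **lookup[feature["OBJECTID"]]} for feature in simplified]
print(rejoined[0])   # {'OBJECTID': 1, 'REGION': 'north', 'AREA_HA': 120.5}
```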
Phewh!!
I know it has a lot more functions than this, but hey, I'm just getting started. If you've done anything more rocket-science with it than what I did two days ago, please share it with the rest of us. Cause I gotta say, this thing is cray!! Love it so much.
mapshaper developer, if you're seeing this, I 🤟🏻 you!
UPDATE
I have been asked about the confidentiality of the data. This is also where you come to understand why the tool will work even if you upload just the '.shp' file of the shapefile, since _that_ is the vector portion of the shapefile.
A shapefile is a spatial data format that is actually made up of several files; three at minimum, commonly four. These files share the same name with different extensions: .prj, .shx, .shp and .dbf. Although I am not familiar with what .shx actually accounts for, the rest of them are pretty straightforward:
.prj: stores the projection information
.dbf: stores the tabulated attributes of each feature in the vector file
.shp: stores the shape/vector information of the shapefile.
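The naming convention above can be sketched with a tiny helper; the filename is hypothetical, and the note on .shx reflects my understanding that it is the shape index file:

```python
from pathlib import Path

# The sibling files that together make up a single "shapefile".
# (.shx is the shape index: offsets pointing into the .shp geometry file.)
SIDECARS = {".shp": "geometry", ".shx": "shape index",
            ".dbf": "attribute table", ".prj": "projection"}

def shapefile_parts(shp_path):
    """Map the expected sibling filenames of a .shp to their roles."""
    base = Path(shp_path)
    return {base.with_suffix(ext).name: role for ext, role in SIDECARS.items()}

print(shapefile_parts("forest_cover.shp"))
```

Uploading only `forest_cover.shp` therefore hands mapshaper the geometry while your attributes stay home in the `.dbf`.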
So, as the tool indicates, it actually helps with the vector aspect of your data, which is crucial in cartography.
Have you ever heard of the binning technique?
My favorite cartographer is John M. Nelson. In fact, he's the one who got me searching for what 'cartography' really is. Fortunately, he's a mix of storyteller/technical support analyst/designer, so his techniques are the ones I have the least trouble understanding. And this is by no means a comment meant to offend, because really, I'm a little slow, and John is a very 'generous' teacher when it comes to explaining things, even through replies to posts. You can witness his work first-hand at his own blog here:
https://adventuresinmapping.com/
So, the first of his works that captured my attention was the Six Month Drought of the American Southeast map, created using the binning method. I didn't even know what binning was, but the map was so pretty it had me announcing my loyalty to #cartography hashtags.
So what is binning? According to GIS Lounge, binning is a data modification technique where original data values are converted into a range of small intervals called bins. Each bin is then replaced with a value representative of that interval, reducing the number of data points.
Okay. It should be a no-brainer. But the data he used was polygon shapefiles of droughts' extent and severity. Although it is still unknown to me how USGS actually collects this data, his map sang a deserving anthem to their hard work. But alas, I never had the chance to reproduce it. I do not have the knack for identifying interesting data of any sort, so I am either stuck reproducing redundant work or wasting my time on a wild goose chase for data; I'm a noob with tunnel-vision focus. I wouldn't even vote for myself if we had a jungle excursion that required mapping, cause we'd be stuck longer than necessary.
Even so, one year later, precisely at this moment...I found a valid reason to attempt it. And it's all because I needed to validate a satellite imagery classification some colleagues made to show hot spots of global deforestation. I am not a remote sensing wizard, but vector data...now that I can work with.
Using the same binning technique, I can summarize the steps as follows:
Merge all the data of deforestation variables
Generate a hexagonal tessellation
Create the hexagon centroids
Use 'Spatial Join' to sum up the weights of overlapping polygon features of the merged data and join them with the hexagon centroids
Then configure the symbology
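The summing step can be sketched in a few lines of Python. This is only the binning idea in miniature: square cells stand in for the hexagonal tessellation, and the points and weights are made up:

```python
from collections import defaultdict

def bin_weights(points, cell_size):
    """Sum each point's weight into the square grid cell that contains it.
    (A sketch only; the actual map used hexagons and ArcGIS 'Spatial Join'.)"""
    bins = defaultdict(float)
    for x, y, weight in points:
        cell = (int(x // cell_size), int(y // cell_size))  # cell indices
        bins[cell] += weight
    return dict(bins)

# three centroids with hypothetical deforestation ranking weights
centroids = [(0.2, 0.3, 1.0), (0.8, 0.1, 2.0), (1.5, 0.4, 4.0)]
print(bin_weights(centroids, 1.0))   # {(0, 0): 3.0, (1, 0): 4.0}
```

Each cell ends up carrying one summary value, which is exactly what the symbology is then classified on.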
Visualizing was a herculean effort for my brain. The map John made is a bivariate map. Compared to his data, which has 2 numerical variables to enable that, mine only had one: the summation of the ranking weights I assigned to the deforestation variables. He merged shapefiles of weeks upon weeks of drought severity readings. Me...I just managed this >>>
My first attempt was to just visualize the probability of the deforestation using the centroid point sizes.
That wasn't much of a success, because visually it doesn't actually aid my comprehension. It looks good when you zoom in closer, because it gives off that newspaper-print feel with that basemap, but at this whole extent it's not helpful.
So, after trying to no avail to make it work by toggling the sizes and the colors, I found that instead of trying to make it look nice, I'd better focus on answering the question posed by my colleague: could you identify the areas with a high likelihood of prolonged deforestation? For that purpose, only a hexagonal mesh would do the trick. So, based on the 10 sq km size of the hexagons that depict the areas of deforestation from their image classification, I used 'Spatial Join' again and joined the centroids back to their predecessor hexagons to carry the binned values.
Et voilà!
The weight summation expressed the degree of likelihood of prolonged deforestation, with values ranging all the way up to 24. I made 4 intervals, which gave a practical visualization. Eight intervals were pushing it, and 6 was not pleasant. It could be my color palette choice that made them unappealing, but too many intervals would defeat my purpose.
Yay or nay...I'm not too sure about it. But I do believe this summarizes the areas that conservationists should be on the alert about.
After having a discussion with a colleague, yeah...this technique has a lot of gaps.
ONE; this is not a point feature. Using the values where the centroid touches/overlays ONLY is not exactly a precise method. Although, it is not wrong either.
TWO; The merged polygonal data came off as OVERLAPPING polygonal features.
Overlooking the shortcomings and just using it as a visual aid for cross-checking...yea, maybe. Even then, it's not as laser-point precise as one would aspire to. I stand humbled.
Here’s a quick run down of what you’re supposed to do to prepare yourself to use Python for data analysis.
Install Python ☑
Install Miniconda ☑
Install the basic Python libraries ☑
Create a new environment for your workspace
Install geospatial Python libraries
Let's cut to the chase. It's December 14th, 2021. Python 3 is currently at version 3.10.1. It's a great milestone for Python 3, but there is hearsay of issues concerning 3.10 when it comes to using it with conda. Since we're using conda for our Python library and environment management, we'll stay safe by installing Python 3.9.5.
Download 👉🏻 Python 3.10.1 if you want to try your hand at some adventurous troubleshooting
Or download 👉🏻 Python 3.9.5 for something quite fuss-free
📌 During installation, don’t forget to ✔ the option Add Python 3.x to PATH. This enables you to access your Python from the command prompt.
As a beginner, you'll be told that Anaconda is the easiest Python library manager GUI for implementing conda, and that it contains all the core and scientific libraries you'll ever need for data analysis upon installation. So far, I believe it's unnecessarily heavy, the GUI isn't too friendly, and I don't use most of the pre-installed libraries. So after a few years in the dark about it, I decided to jump ship and use the slimmed-down version of conda: Miniconda.
Yes, it does come with the warning that you should have some sort of experience with Python to know what core libraries you need. And that’s the beauty of it. We’ll get to installing those libraries in the next section.
◾ If you’re skeptical about installing libraries from scratch, you can download 👉🏻 Anaconda Individual Edition directly and install it without issues; it takes some time to download due to the big file and a tad bit longer to install.
◾ Download 👉🏻 Miniconda if you’re up to the challenge.
📌 After you’ve installed Miniconda, you will find that it is installed under the Anaconda folder at your Windows Start. By this time, you will already have Python 3 and Anaconda ready in your computer. Next we’ll jump into installing the basic Python libraries necessary for core data analysis and create an environment to house the geospatial libraries.
The core libraries for data analysis in Python are the following:
🔺 numpy: a Python library that enables scientific computing by handling multidimensional array objects, including masked arrays and matrices, and all the mathematical operations on them.
🔺 pandas: enables the handling of 'relational' or 'labeled' data structures in a flexible and intuitive manner. Basically, it enables the handling of data in a tabular structure, similar to what we see in Excel.
🔺 matplotlib: a robust library that helps with the visualization of data; static, animated or interactive. It's a fun library to explore.
🔺 seaborn: another visualization library that is built based on matplotlib which is more high-level and produces more crowd-appealing visualization. Subject to preference though.
🔺 jupyter lab: a web-based user interface for Project Jupyter, where you can work with documents, text editors, terminals and/or Jupyter Notebooks. We are installing this library to tap into the notebook package that comes with it.
To start installing:
1️⃣ At Start, access the Anaconda folder > Select Anaconda Prompt (miniconda3)
2️⃣ An Anaconda Prompt window similar to Windows command prompt will open > Navigate to the folder you would like to keep your analytics workspace using the following common command prompt codes:
◽ To backtrack folder location 👇🏻
◽ Change the current drive, to x drive 👇🏻
◽ Navigate to certain folders of interest e.g deeper from Lea folder i.e Lea\folder_x\folder_y 👇🏻
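The three navigation snippets would look like this in the prompt (folder names are just the examples from above; the text after # is annotation, not part of the command):

```shell
cd ..                        # backtrack one folder level
X:                           # change the current drive to X
cd Lea\folder_x\folder_y     # navigate deeper into folders of interest
```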
3️⃣ Once navigated to the folder of choice, you can start installing all of the libraries in a single command as follows:
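A single-command version covering all the libraries listed earlier would look like this:

```shell
conda install numpy pandas matplotlib seaborn jupyterlab
```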
The command above enables the simultaneous installation of all the essential Python libraries needed by any data scientist.
💀 Should there be any issues during the installation, such as an uncharacteristically long installation time (1 hour is stretching it), press Ctrl + C to cancel any pending processes and retry by installing the libraries one by one, i.e.
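One by one, that would look like this:

```shell
conda install numpy
conda install pandas
conda install matplotlib
conda install seaborn
conda install jupyterlab
```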
Once you manage to get through the installation of the basic Python libraries above, you are halfway there! With these packages, you are already set to do some pretty serious data analysis. The numpy, pandas and matplotlib libraries are the triple threat of exploratory data analysis (EDA), and the jupyter lab library provides the shareable, editable notebook that combines documentation with code for your teammates or colleagues.
Since we're the folks who like to make ourselves miserable with the spatial details of our data, we will climb another 2 hurdles: creating a geospatial workspace using conda, and installing the libraries needed for geospatial EDA.
If you're having issues following the steps here, check out the real-time demonstration of the installations at this link 👇🏻
See you guys in part 2 soon!
Survey123 for ArcGIS is perhaps, one of those applications that superficial nerds like me would like; it's easy to configure, kiddie-level degree of customization with 'coding' (for that fragile ego-stroke) and user-friendly template to use.
No app development/coding experience is required to publish a survey form, and believe it or not, you can personalize your survey to not look so meh.
It took me some time to stumble through the procedures of enabling this feature before I understood the 'ArcGIS Online' ecosystem this app is chained to.
So how do we do it? And why doesn't it work pronto?
This issue may be due to the fact that when we first start creating our forms, we go through generic step-by-step procedures that leave little to the imagination about what is happening. Most of the time, we're too eager to find out how it really works.
When we publish a Survey123 form, be it from the Survey123 website portal or the Survey123 Connect for ArcGIS software, we are actually creating and publishing a folder that contains a hosted feature layer and a form. It is on that hosted feature layer that we add, delete, update or edit data. From ArcGIS Online, it looks like any feature service we publish out of ArcGIS Desktop or ArcGIS Pro, save for the special folder it is placed in along with a 'Form' file.
To enable any offline function for a hosted feature layer in ArcGIS Online, you need to enable the 'Sync' feature. So far, many of the technical articles I have gone through to learn how to enable this offline feature always go back to 'Prepare basemaps for offline use', which is a tad bit frustrating. But my experience dealing with 'Collector for ArcGIS' gave me an epiphany when it comes to Survey123. So, when you have prepared your Survey123 form for offline usage and it still doesn't work...do not be alarmed, and let's see how to rectify the issue.
1. Locate your survey's hosted feature layer
At your ArcGIS Online home page, click 'Content' at the main tab. We're going to go directly to your hosted feature layer that was generated for your survey when you published.
Locate your survey folder. Click it open
In the survey folder, navigate to the survey's hosted feature layer and click the 'Options' button; the three-dot icon
At the dropdown, click 'View item details'. Please refer to the screenshot below:
2. Change the hosted feature layer settings
At the item details page, navigate to the 'Settings' button at the main header and click it. This will prompt open the settings page for the feature layer. Refer to the screenshot below:
At the 'Settings' page, there are two tabs at the subheader; 'General' and 'Feature layer (hosted)'. Click 'Feature layer (hosted)' to configure its settings.
At the 'Feature layer (hosted)' option, locate the 'Editing' section. Here, check the 'Enable sync' option. This is the option that will enable offline data editing. Please refer to the following screenshot:
Don't forget to click 'Save'
With this, your hosted feature layer, which serves as the data model, is enabled for synchronization. Synchronization syncs back any changes you've made when you're out in the field collecting data; editing, adding, deleting or updating, depending on what feature editing you've configured.
It's pretty easy once you get the hang of it; just bear in mind that the data hierarchy in the ArcGIS Online universe is as follows:
Feature layer (hosted) > Web map > Web application
Once you get that out of the way, go crazy with your data collection without any worries!
I am a reckless, uninspired person. I call myself a map-maker, but I don't really get to make maps, for the reason that I don't think I should venture outside my requesters' requests. Mostly, I am compelled to get it right, and I feel good when I can deliver what they need. The thing is, I no longer get spontaneously inspired to make maps. Just as the rules become clearer the more books on cartography you read, fears crop up like the plants in 'Plants vs. Zombies' 🌱 on PlayStation.
So, I am scared that my excitement about making maps is beginning to wear off; really making them, not just knowing how to make them.
What sort of idea is great? I mean, what should I focus on trying to make? There is so much data out there that whatever I attempt may be missing the train, or just pale in comparison to other incredible work. I don't really mind that, but I'm not so young that I don't understand that self-esteem does ease the thinking process.
Can't say much; I mean, the 30 Day Map Challenge hasn't gone all that well for me. I should've prepared something before the event started. I quit after the 3rd challenge, cause I overthink and get panic attacks every time I feel I'm doing stuff half-assed.
Despite all that, I am lucky to have aggressively supportive siblings. They just can't seem to stop the tough love, and they're always kicking me to just barf something out.
'It's the process that matters!'
When did I start forgetting how wonderful the process is, huh?
The year 2021 is looming over us, and I am dying to have some sort of control over what I could be doing for the next 365 days. While 2020 has been a year of 'character building', I discovered a lot of things about everything around me and about myself. For starters, I am an avid planner; surprisingly. That does not mean I follow through with my plans, though. See what I did right there? I am admitting the truth behind self-study and a lifetime of learning.
With a lot of things planned to breathe new life into my own progress and time management, I went hunting for some interesting stuff on the internet for inspiration and try-outs. And guess what? I found one, and I think most people may already be using it in full swing, because the reviews are 5 ⭐!
🌑🌒🌓🌔🌕🌖🌗🌘🌑
Taskade is simply a project/team management tool. Ah ah ah...before you write me off, hear me out. Taskade aims to help teams plan, organize and manage their tasks and prioritize output for decision-making. It is simply an interactive planner-cum-organizer-cum-dashboard that shows where you're at with your work and what you've managed to get done, and communicates tasks among the people in your team; IF you have a whole team working on some sort of project. Hence the chat capability implemented in this tool.
At my job, I work in a team of only 2 people; me and a colleague, and we're the regional programme unit that is part of a bigger unit of teammates spread across other regions. Just because your unit is small, it doesn't mean your task load complements your pint-sized manpower. So, I've been looking for platforms that could help me organize our productivity and ensure high-quality output. Just because technology is more advanced, it doesn't mean there isn't a learning curve, right? I have tried just about everything under the sun for project/team management; Asana, Slack, Discord, the pre-existing Google suite..., but none of them could nail all the shortcomings precisely: due dates, assignment of tasks, progress, sub-tasks, interactive commenting, multi-platform sync, brainstorming etc. Channels in Slack give me a headache, same with Discord, and Telegram channels are too 'static' and 'one-way street' for me to view everything.
I found Taskade while trying to find a complementary 'Forest: Focus' extension in the Google Chrome extensions marketplace. There are plenty of interesting high-quality extensions as of late, and I am pleasantly surprised, because earlier this year most of them were quite 'beta' in their functionality. I saw a 'Bullet Journal' extension that someone raved about, and another individual commented: 'Isn't this Taskade?'. Curious cat that I am, I googled it and was not disappointed. What were the main keywords that hooked me?:
FREE
Google-integrated
Remote work environment advocacy
Multi-platform
What features do Taskade actually have? ✨
Given that it is an all-in-one collaboration tool, it is understandable that the GUI is pleasing to the eyes. I do understand that first impressions are everything: color, packaging, forefront information and visuals. But it was really the functionality that delivered me to salvation. If you're an active member of Dev.to, then you'll catch feels with the theme that Taskade delivers. Key features in Taskade that you should try out:
Task list
Collaborators invitation feature (no organizational handle required)
Chat feature (with a call feature!)
Workspace feature (nothing new but...I'll get back to this later)
5 interchangeable neural-forest task list templates: List, Board, Action, Mindmap and Org Chart; the switching is seamless, with no errors.
The ability to use the platform itself for presentations, or to export a task list as a PDF printout.
Safe to say, Taskade buried me alive with the curation of beautiful images for the background; again...not relevant but needed to be said.
The Live Demo sandbox lets you try it out for yourself, although at first glance you may be wondering what on earth you are looking at. But it won't take long before you discover that it is quite intuitive.
Did I mention you can download and access it from just about ANYWHERE? Laptops, browser extensions and even smartphone apps. I'm not kidding when I say Taskade is multiplatform; it works on Windows, Mac, Android, iOS and Linux. Currently, I am testing it out using the Chrome extension, and I installed the app on my Android phone. It works like I expect it to so far.
What is the difference between the FREE and PAID version? 💰💰💰
As I just mentioned, you can sign up for it for free and use it for life...for free. The paid version, as of now, seemingly exists to accommodate larger uploads: on the free plan you can upload 5MB per file, while the paid version increases the limit to 50MB per file. Both versions offer:
Unlimited storage
Unlimited tasks entry
Unlimited project creation
Unlimited collaborators addition
The development team is currently adding more functionality, such as Project Activity Tracking, integration with Dropbox, Google Drive and OneDrive, as well as Email Integration -- all available for free.
Although it is stated that the free version of Taskade includes unlimited tasks, collaborators and all essential features, it is also mentioned that you will need to upgrade if you exceed the workspace limits -- limits which aren't actually elaborated on anywhere, and which I will try to dig into soon enough. Safe to say, though, that if you are a single person using this tool, you are considered a team of 'one', and sharing projects in your workspace with 'editors' is still free. Only the addition of workspace members is billed. This implies there is a cap on how many individuals you can add to your workspace before you are required to upgrade. So far, from what I can see, the limit may be 2 additional people, making up to 3 people per workspace (including yourself). You can find details on pricing and FAQs here:
Taskade | Simple Pricing
Personally, I don't think USD5 is a hard bargain if you're self-employed and work with external parties collaboratively. If you're part of an organization, feel free to ask them for a demo. Discounts are possible if you're from a nonprofit or educational institution.
How do I use Taskade? ☕
Well, given that it was free to sign up, I tried it out straight away, and I'm happy to report that I successfully managed to use it without having to google anything or watch any how-tos. That is a good thing! In fact, I am quite elated with just how easy this tool is to use: I have used my personal email to centralize and manage my work and personal tasks side by side. If you prefer a satellite view of your progress and all the tasks you need to complete to clear a certain objective, this is not a bad way to organize.
So I created 2 workspaces: one for work and one for my personal tasks. Then I just collate all my tasks into monthly projects.
My personal tasks involve updating my study progress and curating stuff I like online into my Tumblr blog:
Create studyblr workspace
Create a new project in the studyblr workspace to organize and brainstorm the Tumblr content I plan to create and post: Tumblr: 2021/01.
Utilize the Mindmap template from the available options and start creating and organizing the content I want, plus the tasks I need to execute to develop it.
Et voila! That's all there is to it! It is easy peasy, and you can start adding due dates as reminders, links as resources, and hashtags for filtering in the future. Check out some of the drafting I did so far in the screenshots below!
For more updates, check out their Updates page, which fully utilizes Taskade itself to share all the updates from December 2017 till present, and the chat function is available there for you to ask the Taskade team about feature updates directly. Now that's awesome, because you know something's good when the people who make it actually use it. 😎😎😎
Don't break the chain peeps! Reblog cause I'm looking for inspiration for my next masterpiece! 🙇🙇🙇