Over a year ago, Niantic (the developers behind Pokémon GO) announced a game called Wizards Unite. In their announcement, they celebrated the state of their AR technology, including how successful Pokémon GO already was at that point (November 2017). It meant that they had the ability to expand on that and create an experience for the Harry Potter franchise:
We’re incredibly excited to announce this next step in the evolution of AR mobile entertainment. With Harry Potter: Wizards Unite, players that have been dreaming of becoming real life Wizards will finally get the chance to experience J.K. Rowling’s Wizarding World. Players will learn spells, explore their real world neighborhoods and cities to discover & fight legendary beasts and team up with others to take down powerful enemies. We’re thrilled to partner with Warner Bros. Interactive Entertainment, and WB Games San Francisco’s development team to bring this magical and beloved series to life in a brand new way. Harry Potter: Wizards Unite will leverage the full stack of the Niantic Platform while also providing an opportunity to pioneer all new technology and gameplay mechanics.
It sounded really impressive, but I didn’t expect it to take this long: we’ve only just been treated to a trailer, and it won’t be released until 2019.
The story sounds reasonably interesting:
The Ministry is looking for witches and wizards willing to roll up their sleeves and volunteer to save the wizarding world from the Calamity. As a member of the Statute of Secrecy Task Force (a new task force formed in partnership between the Ministry of Magic and the International Confederation of Wizards) you will hone lightning fast wand reflexes, an ability to sniff out the faintest whiff of magical disorder from afar, and proficiency in advanced casting of multiple spells.
But it will all depend on the implementation of the real-world scaling, the AR features, and just how it will be played. The idea of Pokémon works very well in this scenario, and Pokémon GO has only proved that. But I’m not sure how you could build a game around the Harry Potter franchise that constantly pulls you back in, without it being a fixed storyline.
I’m a big Harry Potter fan, so I’m going to play it no matter what. But I hope it’s something long-lasting.
You can find out more information, and also sign up for updates on the Harry Potter: Wizards Unite website.
We take photos to capture moments. A great photo can fill in the blanks of our memory, instantly recalling forgotten details and conversations otherwise lost to time. But has looking at a photo ever raised more questions than it provided answers? Let’s look at reimagining the Photos app to better tell the stories of your memories.
“Why was I wearing a winter coat in May?” “What were we listening to on that road trip?” “Why were you downtown so early on a Sunday morning?” Where our memory fails, technology can assist. Beyond being a great camera, modern iPhones store and can tap into rich libraries of data relevant to our lives. By interpreting this information through the intelligence of iOS, the Photos app could be expanded in two key ways to help weave our memories into vivid stories.
There are a ton of great suggestions on the types of information that can be extrapolated from photos, and it’s certainly interesting to see how it would look in the UI.
I really like this concept, and it’s certainly something I think Apple could pull off, even without collecting personal information en masse, as it can be done on the device. And anything that needs to be referenced from somewhere else, such as the weather or information about a location, can be requested completely anonymously.
Juli Clover, writing for MacRumors:
At its annual Adobe Max conference, Adobe announced plans to bring a complete version of Photoshop to the iPad in 2019.
Photoshop CC for iPad will feature a revamped interface designed specifically for a touch experience, but it will bring the power and functionality people are accustomed to on the desktop.
I’m interested to see how Photoshop will actually work on the iPad. They do say it’s the full version, but will it include the automation that was available on the desktop, and how will it fit into the iOS environment? For example, will it have support for Siri Shortcuts, support for a Photo Editing Extension, and how are the toolbars going to be translated into iOS UI?
Then there’s the price. Affinity Designer and Affinity Photo are both priced at £19.99, Pixelmator is only £1.99, and there are apps like Polarr that are free. I’m guessing they’ll extend their Creative Cloud subscription to include the iOS version, but I think a cheaper solution is needed to be competitive on this platform. £9.98 per month is their cheapest individual plan in the UK, and that will deter a lot of people.
Another recent announcement of theirs, Project Gemini, is probably more suited to my uses, as it’s a lot simpler and focuses on drawing and illustrating.
I’ve started watching more television series recently, and more importantly, I’ve set a goal to rewatch/watch all the episodes of Pokémon (best not to go into that one too much; I’ve been obsessed since I was a child).
Naturally, I decided to find an app that can help me remember where I am with everything, as my memory has never been excellent. After a little search I came up with a few suitable apps, including iShows and SeenIt, but I settled on Hobi pretty quickly.
It integrates with Trakt, which most of them do, so I was grateful to see some of my previously watched TV shows appear in the app immediately. It can also notify you when new episodes air, when release dates are announced, and when seasons premiere.
It’s topped off with a pretty great design. I like how the most prominent parts of the UI are the cover art and the name of each TV show. It looks a bit like Things for iOS, and I’m a big fan of that clean aesthetic right now, especially when it’s joined with slightly larger and heavier fonts.
You can download Hobi for free on the App Store. But they also offer a premium subscription which enables advanced sorting options, no show limits, and also no limits on the number of devices.
Yesterday was iOS 12 release day, and it was a pretty good day overall, so I thought I’d share some of my personal highlights.
I released version 1.3 of Text Case which includes support for Siri Shortcuts. It was covered by John Voorhees at MacStories, which was great! And it was met with a good response from quite a lot of people. It’s by far my favourite app I’ve developed, and it’s been super fun seeing where I can integrate the functionality of Text Case throughout the system.
Then, of course, there’s the annual event for nearly everyone in the Apple community, Federico Viticci’s iOS review! He puts so much work into it, and you can always see that when you read it. This year it’s packed with some great Siri Shortcuts information and even more if you’re a Club MacStories member (which I am!).
There were also a bunch of other apps that released updates with support for Siri Shortcuts, which was great to see from both a developer and a user perspective. My favourites so far are PCalc, Bear, Ulysses, Citymapper, Overcast, CARROT Weather, and Things. And now that password managers can integrate properly into iOS, I’m also going to look at using 1Password, LastPass, or even one of the many others that have been updated.
The best part of the day was probably Twitter, and while it hasn’t received a lot of praise recently, the Apple community is a major reason why I can’t see myself stopping using it. Everyone was happy about the updates, there were a bunch of conversations about Shortcuts, and generally everyone was having a good time! I even made a little snark about Mastodon users. Phil Schiller even shared some unfortunate news about the squirrel from the 4S introduction event.
Now I just need to wait until my XS arrives on Friday, so I can try out the parts of iOS that I’ve been missing, like Animoji, Memoji, Portrait Lighting, the new Depth Control, and even Face ID!
…if you want the best Google software, iOS is really the place to be.
That sounds crazy, and maybe for some people it is, but as someone who relies heavily on Google’s software in both my personal and professional life, iOS has been a great platform for getting everything done that I need to do. Not only that, but a shocking amount of Google apps are updated first on iOS or are totally exclusive to iOS for months before going to Android. And with new apps like Files and updates to Siri intents, Google’s apps can interact more closely with iOS than they could in earlier versions of iOS.
I can’t say I’m well versed in the Android ecosystem, but I am aware of it. I pay attention to Google I/O announcements, and of course, there’s an Android developer at work so I have at least some perspective.
The only issue, or at least the biggest one I can determine, is the obvious level of fragmentation. This used to be the argument around app design and quality, back when the iPhone came in just one size and Android already had loads of variety.
The fragmentation that I think causes these problems comes from the multiple Android vendors and mobile networks, which introduce needless bottlenecks to the whole platform. Whether it’s a small update that gets ignored by certain manufacturers, or a major release that takes extra time for a company like Samsung (just picking one at random) to layer their software on top of before shipping it to consumers, I just don’t think the wide variety of Android phones combines to make a stable ecosystem.
That’s a whole lot different with iOS though because there’s less device variety, a higher percentage of users are on the latest version of the OS, and the App Store is a widely known success. I think this is why Google do so well. Because they can leave the foundation work to Apple, and that leaves them with just the software. And I can admit they can make pretty good software.
Theo Strauss, writing about Lyft’s new implementation of the search bar, and why it’s best placed at the bottom:
Although we don’t think about it too often, a search bar all the way at the top of the screen is hard to reach. Especially for users who have smaller hands or users who have less flexible hands, reaching up is annoying, mostly because the top of the screen is far away from where their fingers sit.
If you visualize most apps, the main content is in the middle or lower-mid area. Tab bars for navigation, posts on social media, and keyboards on messaging platforms are all examples of important pieces of experiences sitting in a more reachable position.
I feel exactly the same. The ability to search within an app, or just accessing the main navigational controls of an app, should be the most accessible parts.
In a world where we use tools such as a mouse or laptop trackpad to direct a cursor around a screen, a classic vertical layout, with all navigation at the top and the content filling the rest of the space, is probably fine.
However, nowadays we interact with content on our displays directly, so it needs to be designed with a human hand in mind, not a cursor.
You can already see Apple pushing developers and designers towards this bottom-up approach, as they’ve added a “pull up” drawer-like component that contains a search bar and results to the Maps app. This is the approach I feel needs to be standardised going forward, but it isn’t the only example: the Music app also follows this idea of having controls at the bottom, with the now-playing indicator living there.
I do see this becoming a trend very soon, and I suspect that in a few months quite a lot of apps will be using a sheet similar to the one in Apple Maps. The only drawback is that Apple don’t provide a standard implementation of this bottom sheet, so developers either have to implement it manually or adopt a library from a third-party developer.
I’ve been experimenting with it at work, and I’ve found one library to be very useful: PullUpController by Mario Iannotta. It lets you add any view to act as the bottom sheet with a simple one-liner, it manages the sticky points and inner scrolling views and content, and you can extend it to your needs.
Hopefully Apple can share their implementation and more developers can make use of this new interface style.
With all the nostalgia of the early App Store and iOS SDK days, Frederik Riedel tweeted about his experience developing iRedstone:
When I created iRedstone, I had no idea what object oriented programming was. I had no idea what a ViewController was. It was all in one UIView. But it worked. You don’t have to be an engineer to create apps.
— frederik 🧗🏼♂️ (@frederikRiedel) July 10, 2018
After he tweeted that, other developers started quoting it, and sharing their experiences. Frederik has compiled a great collection of them over on his blog.
It hasn’t been long since the release of Text Case, but I’ve already had some great suggestions, so I decided to add them in!
So here it goes.
Five extra formats:
- URL Decoded
- Capitalise All Words
- Camel Case
- Snake Case
- Hashtags
One format has been “fixed”, and that is Capitalise. It now does the obvious and also capitalises the first letter after a period.
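For anyone curious how these conversions work, they’re mostly simple string transforms. Here’s a rough Python sketch of three of them (hypothetical helper names, not Text Case’s actual code):

```python
import re

def snake_case(text):
    # Lowercase every word and join them with underscores.
    return "_".join(word.lower() for word in text.split())

def camel_case(text):
    # First word lowercased, every following word capitalised, no spaces.
    words = text.split()
    if not words:
        return ""
    return words[0].lower() + "".join(w.capitalize() for w in words[1:])

def capitalise_sentences(text):
    # Capitalise the first letter of the text, and the first letter
    # following a period, as the fixed Capitalise format now does.
    return re.sub(
        r"(^|\.\s+)([a-z])",
        lambda m: m.group(1) + m.group(2).upper(),
        text,
    )
```

The real app presumably handles plenty of edge cases (punctuation, existing capitals, Unicode) that this sketch glosses over.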
You can now choose which formats you want to enable by navigating to the Settings page and flipping the switches. This will obviously allow for a more customised interface, as I imagine some people won’t want all 12 formats to show if they aren’t needed.
I still have two things I want to work on. One is the ability for the action extension to be able to replace the original selected text with the new converted value. The other is a pretty great idea that I can’t share until I figure out how exactly I’m going to implement it. But it will be an advanced feature.
I’d also like to say thank you to everyone that has already downloaded Text Case, and I plan to keep adding useful updates!
If you haven’t already, you can download Text Case on the App Store.
Very unsurprisingly, iOS 12 brings better notifications support. There aren’t too many changes, but they are certainly most welcome.
The big one is grouped notifications. It’s probably the notification feature I’ve been wanting the most, and Android constantly used to make me jealous with it.
I’ve not quite worked out the requirements for them to group together, because I’ve seen iMessage conversations automatically group, but other apps group after 4 or so individual notifications.
There are three options for grouping your notifications: automatic, by app, and of course, none. The interesting one is automatic grouping, because apps can actually help the OS work out which notifications should be grouped together by providing different identifiers. I’m not going to go too far into the technical side, but you’ll notice that Messages will group messages by conversation, with each conversation appearing as a separate group on your lock screen.
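The identifier mechanism is worth a quick sketch. On iOS, the hook for this is the `threadIdentifier` field on a notification’s content; conceptually, the system just buckets notifications by app and thread. A rough illustration of the idea in Python (not anything iOS-specific):

```python
from collections import defaultdict

def group_notifications(notifications):
    # Bucket notifications by (app, thread id). Notifications without
    # an explicit thread fall back to a single app-level group.
    groups = defaultdict(list)
    for note in notifications:
        key = (note["app"], note.get("thread", "default"))
        groups[key].append(note["text"])
    return dict(groups)

notes = [
    {"app": "Messages", "thread": "alice", "text": "Hi!"},
    {"app": "Messages", "thread": "bob", "text": "Lunch?"},
    {"app": "Messages", "thread": "alice", "text": "Are you there?"},
    {"app": "News", "text": "Morning briefing"},
]
```

With this input, Messages ends up with two separate groups (one per conversation), while News collapses into a single app-level group, which matches the per-conversation behaviour you see on the lock screen.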
There’s going to be more to find out about grouping though, as I’m sure there are different factors that will change the way the system handles them.
In regard to the actions you can take on notifications, you now get to control how any app’s notifications are configured, right on the lock screen. All you need to do is swipe right-to-left and tap Manage. Then you’ll find three different options (depending on the current settings):
- Deliver Quietly/Prominently (The opposite of what is currently set)
- Turn Off
- Settings (This takes you straight to the app’s notification settings, so you can fine-tune all of the settings.)
These are all welcome options, and I particularly like Deliver Quietly for apps that I want information from, but don’t care that much about. These settings have always been available, but they’ve always been a hassle to get to, and the Quietly/Prominently options make them simple and clear enough for everybody to understand.
Do Not Disturb
Something else related to notifications is Do Not Disturb, which also received a few improvements.
It’s actually been split into two different levels of not disturbing you, differentiated by the Bedtime Mode option. Normally, Do Not Disturb just means not to notify you, but if you turn on Bedtime Mode, it will keep your screen completely free of distractions until the time period is over, or until you turn it off. Something that makes a lot of sense.
It also benefits from “Siri” (the intelligence in your device, rather than the voice), because it’s something else it can suggest for you. It can be triggered by a time, location, or event. I’ve already seen this a few times, once where it suggested I turned it on, but only until an event in my calendar was over. Pretty clever.
These are some fantastic improvements to how notifications work in iOS, but I would still like one more thing from Android, and that is the ability to mute specific notification categories. Apps can already define these categories, as that’s part of how iOS groups notifications, but Android users can mute specific categories from an app, making it an even more personalised system. However, that’s very much a want, and not a need.
If you read my recent piece on refining how I use my devices to maximise their value, you’ll know that the one major thing I wanted was more insight into how I used them. Screen Time is that thing.
In the most Apple way, the data is accompanied by pretty graphs, and there’s quite a bit of information available. You can see the apps that have taken up your screen time, how many notifications you receive from each app, how often you pick up the phone, and even what your longest session was.
I haven’t received one yet, of course, but Screen Time will also give you a weekly activity summary, which would be a good time to reflect on how the week went, and then take measures to ensure you use your devices in the ideal way.
If you want to be more strict with yourself, there are some settings you can play around with to ensure you know when to stop looking at your phone.
Downtime is a period of time during which you will not be able to open any applications that are not in your Allowed list, ideal for setting a strict bedtime. Then you have App Limits, where you set the amount of time that you’re allowed to spend in a specific app, or category, and these can even be specific to each day of the week. Finally, there’s a bunch more restrictions you can put on yourself, but these apply more to parents who want to stop their children from accessing certain content, or just ensure they don’t sit on Minecraft all day (what I used to do).
I’m super happy with this feature, and I can’t wait to see my first weekly report. Although I imagine this week’s will be completely skewed, as I’m using my device more than usual to try and find any cool new things in the beta.
Kicking off my collection of writing on WWDC 2018, I’m going to talk about Siri Shortcuts, and the Shortcuts app. As soon as I saw it, I knew it would be one of my favourites from the whole event.
The announcement was received by most people as “now we know what the Workflow team have been up to”. I’m not complaining; I posted the same thing. It is probably the best way the Workflow acquisition could have gone, because now it’s completely tied into the OS. It may have a different name, but it will always be Workflow.
The features announced were really about how the OS interacts with shortcuts, and how Siri is more intelligent because of them. Not the voice Siri, but the computational Siri that can understand you and suggest things.
It will, of course, require developers to expose different user actions in their apps, which will allow Siri to analyse their usage, suggest them later on, and maybe let users build with them in the Shortcuts app.
There were some intriguing demos for the suggested actions, which is not something I often say, because usually demos are based around unrealistic or ideal situations. But because Siri lives in your phone, it knows about you, what you’re like, and the environment around you. Which is why it can suggest you turn on Do Not Disturb when you go to the cinema.
They also showed an example of a regularly occurring event, such as buying a coffee in the morning. Maybe not everyone buys a coffee from an app on their phone every day, but I use the Starbucks app every time I go, and that could easily be at least three times a week. So if it learned (or just used Maps to find) the location of Starbucks, recognised that it was associated with that action, and suggested it, that would be very helpful! It’s certainly something I feel it would be capable of, and it’s not the usual kind of Siri feature that’s nice to think about but never used.
It does get more advanced though, and that’s with the Shortcuts (Workflow) app. I conceptualise it as being similar to Scenes in HomeKit, where you can say a phrase such as “Good morning”, and then Siri performs a bunch of tasks to set you up for the day. Maybe it sort of encompasses the automation of HomeKit?
I’ve already been playing around on the iOS 12 beta, and while I’ve already been suggested some actions, like enabling an alarm, messaging my girlfriend, and even adding a new to-do in Things, we don’t have the Shortcuts app yet. That will come in a later update via the App Store, so I will definitely have to write more about it in the future. But from the keynote, it looks like they’ve added the Apple style to Workflow, which will definitely make it feel easier to use for general users.
One of my questions, though, is how well suited this is to a general user. I will be very keen to see if it’s a widely adopted feature, and even if the Shortcuts app with custom actions might not be, I see the Siri suggestions being a big hit.
After watching the Keynote, I was thoroughly impressed. While there still isn’t a dark mode for iOS, I can imagine it coming soon. And there are a lot of cool things that were announced.
While watching the event, I took a note of the top 4 for each OS, excluding tvOS, because who cares?
So here they are:
- Siri Shortcuts
- Screen Time
- Automatic Workout Detection
- Walkie Talkie
- Interactive Notifications
- Dark Mode
- Dynamic Desktop
- Mac App Store
I plan on doing some writing about the new features, but in more of an opinionated way, rather than a simple informative guide. You’ll find these with the WWDC 18 tag.
With iOS 12’s imminent announcement, I thought I’d prepare myself for a new way of using my devices.
For months now, I’ve been trying to refine my use of the devices, apps, and services in my life. But I think a different approach is needed, and I hope that future OS updates will help me along the way.
The method I’ve been using for a while is quite a harsh one, where I disabled notifications, and everything associated with them, on nearly all applications. Along with getting rid of some apps/services that I don’t think provide any value.
But while I think this has been a step in the right direction, I don’t think it’s a particularly accurate way to achieve my goal of adapting my devices to my needs, and for it to provide me with the most value as possible.
That’s why I’ve now done a complete reversal and turned on all the notifications, and possible distractions on my iPhone. In the short term, I’m hoping this will let me find out where I don’t need to be spending my time and also see if there is any value to them. I mean, I know notifications can be valuable, but I want the right balance. And by turning them all off, I’m potentially missing out.
So tonight, I’ve already gone through a few apps to disable types of notifications, and in some cases, just deleted the app entirely. For example, I have an app for a restaurant I go to maybe once every two months, but they send at least one offer notification every single day.
What I’m mainly hoping for in the next iOS update are pretty minor things, with the ability to group notifications having the highest priority. I can’t even begin listing the types of apps that would benefit from this, because it’s probably all of them. I also think improvements can be made to the way notifications are visualised, because even grouped, it’s still just a list.
Then there’s priority: not all bits of information are equally useful, and even if they are, you might not need to know about them right now. Things like iMessages are more important than likes on an Instagram post, and work emails are certainly not relevant out of work hours, or maybe even away from a work location. So there’s a lot of work that can be done here, involving sorting, filtering, and queueing/snoozing.
If all of these issues are “resolved”, then I think the way devices are experienced, and even used, will change quite a lot.
There’s also one more tool that would help focus your device usage on a bigger scale, and that would be a way to monitor and visualise your usage, or habits, system-wide. Of course, you can kind of track this already using the battery analytics that tell you the screen time for each app, but I want it better, and more in my face. Because more insight can only be better.
This is, of course, a long-term goal, and maybe more of a process. But I plan to write about my journey of focusing my usage of devices, and in general, refining my life to maximise value.
I have a few more ideas that I want to try soon, so you’ll find these here on the blog as well.
Build beautiful, usable products using Material Components for iOS. – material.io
Or how about you follow the Human Interface Guidelines by Apple, which is what all iOS apps should be using.
Say you write an iOS app, and now you want to write the Mac version.
Assuming there’s a data model, maybe a database, some networking code, that kind of thing, then you can use that exact same code in your Mac app, quite likely without any changes whatsoever.
I agree with Brent here. I’ve never really understood the argument that AppKit is that difficult to understand, so that’s why people don’t port native apps over. Surely the underlying logic of the app is the hard part, and linking the functionality to the interface is the easier part?
I would say I’m more of an iOS developer, simply because I’ve spent more time on it. But I’ve also made a few Mac applications. Sure, a resizing window is a bit more complex than a relatively fixed screen size, and some of the interface elements are named slightly differently.
It’s just different, for both sets of people. But not as difficult as it may seem.
I’ve read some reviews about the game already, and it appears that everyone on the internet has something good to say about it. I can only add to that.
The whole game is quite a mix, in that it’s very relaxing while requiring your complete focus, and it has a potentially very long game time while offering short-term goals.
I find it very easy to be sucked into, and it’s a great game to take your mind away from other things. The achievements and Game Center leaderboards feed my own competitiveness, and I really want to move up in the Best Score category; as of the time of writing, I’m ranked 11,157 with 65,065 points. But at the same time, I also enjoy playing it when I have short bursts of free time, such as commuting to work, or just between other tasks.
Apart from the gameplay, the game offers a really immersive environment, with ambient music, relaxing sounds, and super colourful settings. It’s enjoyable to just look at the thing.
Ryan Christoffel wrote a great piece over at MacStories, about what he wants to see the iPad gain from the Mac:
I made the iPad Pro my primary computer when it first launched in late 2015. The transition pains from Mac to iPad were minimal, and the device has grown even more capable since that time thanks to improvements in iOS. My need for a Mac is now extremely rare.
My desire for a Mac, however, still exists in a few specific use cases. There are things the Mac has to offer that I wish my iPad could replicate.
Now that the modern iPad has many basics of computing covered, here are the things I think it needs to take iPad-as-PC to the next level.
My favourite proposition:
Wouldn’t it be great if an app like Workflow could become more Hazel-like, triggering workflows automatically in the background based on pre-set rules?
They’re great ideas, and I hope Apple adopt at least a few of them.
As you may or may not know, I’ve been building my own iOS app for Manton Reece’s Micro.blog.
A short description of Micro.blog, if you aren’t already familiar:
A new social network and publishing platform for independent microblogs, created by Manton Reece.
Development is going well, and I’m nearly ready to announce the first beta version, but I thought I’d write about the current progress, and what you can expect to see in the first beta version. This development log will hopefully become a regular thing as I add more features to the app.
b0.1 – Read Only
The codename for this version is “Read Only”, and that stems from the fact that it will not have any ability to write posts. Writing is something I want to spend a lot of time getting right, and it shouldn’t hold back a beta version from being released.
Right now, there are 5 main sections in the app:
The first four are pretty much the same, except they present different lists of posts. But they are what you’d imagine.
On each post in these lists, at the minute you see the name and username of the author, the post’s content (of course), and the date. Each post also has a favourite/unfavourite button in the top-right corner. Swiping right to left on these cells will show you the full conversation relevant to that post.
I currently also do some basic link detection in posts, and if there’s an @ mention with a link to a Micro.blog profile, tapping it will navigate to that profile page. Anything else at the minute will launch in a Safari view inside the app.
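The mention handling is easy to picture: Micro.blog profiles live at micro.blog/username, so detection is essentially a pattern scan over the post text. A rough Python sketch of the idea (a hypothetical helper, not the app’s actual Swift code):

```python
import re

def find_mentions(text):
    # Scan the post text for @username mentions and pair each one
    # with the corresponding Micro.blog profile URL.
    return [
        (name, f"https://micro.blog/{name}")
        for name in re.findall(r"@(\w+)", text)
    ]
```

In the app itself, each detected mention would then be turned into a tappable link that opens the profile page in place.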
On the profile page, for yourself or other users, you currently only see the name, username, photo, and the number of people they are following. You cannot see how many followers a user has in any case. Tapping the following count will show a list of all of those users.
The app currently supports both methods of authentication: an app token, or requesting an email that contains a link to open the app.
I started on a side menu as well, which at the minute simply shows the version number. But this will be expanded heavily in the future.
Of course, one thing I need to add is the ability to log out! It will be placed in the side menu.
I also want to expand the profile pages by adding the bio and a link to the user’s website. Other user-focused features I want to add are the ability to tap on a user’s image to open their profile, and the ability to follow and unfollow a user.
Finally, I need to make some icons for the overall app (most likely a quick draft for beta purposes), the different tabs, and also one for the menu.
Apple today launched two more videos focussed on the iPad Pro on their YouTube channel.
With iPad Pro + iOS 11, you can use augmented reality to literally transform the world around you. Your next computer might not be a computer.
With iPad Pro + iOS 11, you can use Apple Pencil to create multimedia notes. Draw, type, or drag and drop your favorite photos from Files. Your next computer might not be a computer.
I’m really enjoying their latest series of iOS 11 videos. It’s not a simple “an iPad is better than a Mac” argument. Instead, they tend to focus on a younger user that has no concept of “a computer”, but treats an iPad as the device.
It’s becoming even more apparent that younger generations are the ones truly adapting to new technology, mainly because they haven’t got the burden of really knowing what it was like before these new devices, such as the iPad Pro.