When I saw the video, I thought I’d give it a watch, since I’ve found other Rich Roll episodes to be pretty interesting. Well, I was right: it was a fascinating interview, full of intriguing little tidbits and things I didn’t know before, or at least hadn’t fully understood.
I’d encourage you to watch the interview, but I also decided to write about three things from the video that I found particularly interesting.
Dopamine Rewards Behaviour
We tend to think that dopamine is a chemical that is released whenever something big happens, and that it’s a kind of pleasure/reward response. While that is true, dopamine is also a chemical that rewards behaviour so that we are encouraged to repeat it.
A lack of dopamine can also cause you to quit an activity or behaviour, which is why mentally celebrating small milestones is so beneficial. Doing so makes your brain release small hits of dopamine, which in turn pushes back your desire to quit and reassures you that you are on the right path.
He also touched on reliance on external dopamine triggers, and how they can negatively affect you when they disappear. If you perform an action that you have become accustomed to receiving external gratification for, and therefore a dopamine release, and that release does not happen, then your likelihood of quitting increases. The behaviour no longer triggers the same reward it used to, so your brain treats it as having a lower value. This somewhat ties into addiction, which he explained in the latter parts of the video.
Mental Focus Follows Visual Focus
Our eyes are part of our central nervous system, and can be seen as part of the brain. One chemical that is apparently key to visual focus is adrenaline: it causes your pupils to dilate, allowing you to focus better on a single thing visually. Your body releases adrenaline as a response to stress, so you can better deal with the situation at hand.
He also said that this level of focus after a release of adrenaline is most likely what some people nowadays are referring to when they mention some kind of “flow state”. And once you are in this state of visual focus, it triggers cognitive focus in your brain.
On the other hand, when you are in a non-stressed state, your brain allows for a more panoramic view, which in turn allows for more awareness of your surroundings.
Time perception is also apparently linked to our level of focus on our physical space: the more focused we are, the more it seems that things are happening in a shorter period of time. Conversely, when your focus is more dilated, it appears as if you have more time, and everything is spaced further apart.
I found it interesting that he said this was not the same as time itself going faster or slower; it’s just that the rate at which things appear to happen changes.
How to Decompress
One thing that most of us are probably at least slightly aware of is that taking breaks allows for decompression, and helps us recover our energy levels. But what we do during those breaks also matters.
I’m sure a lot of us are aware of context-switching, and how it can take time to adjust our mind to a different context. This is relevant when taking a break too: if you want to decompress, switching to another activity that puts you in a focused state will only make it harder to refocus on your main activity afterwards.
It’s better to take regular breaks where you are not partaking in any activity that requires substantial focus, and to instead allow a more panoramic view of your surroundings.
Then, just as I mentioned above, your mental focus will follow your visual focus, and your body will be more able to recover energy.
It also means that you require less energy to refocus your mind when going back to what you were doing.
I have a few life goals written down which, if I were to achieve them, would signify a very satisfying life for me.
One I would like to share today, is that I would like to not need to set an alarm on a day-to-day basis. I want to be able to wake up naturally, and go about my day as I see fit.
The biggest and most obvious hindrance to me achieving this goal is my day job. And for most people attempting to achieve something similar, where they have more control over their day, this problem would probably be the same.
The essence of the goal is mostly based on having control over my own timetable, but at the same time, I don’t want this to mean that I can’t work. I’m happy working, but I want to do it on my own time. Maybe that’s expecting too much, but I have a feeling that many people would like to control their day a little more.
This actually came back into my head recently with the news that the U.S. Senate unanimously passed a bill to make Daylight Saving Time permanent, and also because of commentary from LD Stephens and Mike Rockwell.
But the idea that triggered it the most was from LD Stephens, where he mentioned how it would more negatively affect the northern states, with darker mornings in the winter months. While I won’t pretend to understand the geography of the U.S. or the opinions of its citizens, it is interesting to me that even inside a time zone, not everyone experiences the same day. I haven’t quite figured out if I think this is a problem, or if it’s just a situation that we have to get on with. But intriguing to me nonetheless.
Where this ties in to my idea about controlling your day is that time zones are experienced differently based on where you are within them. So while it may say 8 am on everyone’s clocks, in one place the sun may have risen, and in another it hasn’t. That’s probably not a big deal to most. But when you add the context of a normal day job starting at roughly the same time everywhere, it has a more visible effect.
I have many questions in my head regarding why we need to start work at a certain time, and why working hours aren’t adjusted on a hyper-local basis. But I guess the answer is mostly that it’s simply easier this way.
When I think about Daylight Saving Time, it does at least attempt to counteract the varying sunrise/sunset times over the year. But I don’t think the “solution” to this lies in the concept of time itself, but in how we base our lives on it. For example, if you normally start work at 8 am, but now the sun doesn’t rise until 8:45 am, why can’t you just start work at 9 am instead? Why do we adjust time to compensate? I’m guessing because “it’s easier”.
I’m not sure that this would work for everyone, but I think a more flexible approach to working hours would be widely accepted.
Before the pandemic, I started coming into work earlier, starting at 7:30 instead of 8:30. While this may seem flexible, I’m pretty sure there would have been a bit of friction if I’d asked to start at 7:45 instead. It’s also not truly flexible, in that I had to agree this new time in advance; I couldn’t have just turned up one day at 7:30.
What I think would be a good solution for most people is if your starting time wasn’t fixed, but there was instead an hour-long window, with your working hours starting from whenever you arrived. Perhaps one day you woke up 15 minutes late, and therefore arrived “late” at the office. Why treat that as a problem of lateness? Why is there not a general acceptance of the fact that you arrived 15 minutes later, and that you will simply leave 15 minutes later as a result?
Personally, I think the best solution for people and companies (where the job allows), is if your entire working hours were flexible. For example, you could be contracted to work 8 hours a day, but you are free to fulfil those 8 hours between the hours of 7 am and 9 pm. Maybe one day you have plans later on in the day, so you choose to start at 7 am, and have a short lunch. But another day, you want to go have brunch with friends, so you could start later, or perhaps you just take a long break?
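To make that concrete, here is a minimal sketch of how such a scheme might be checked. The 7 am to 9 pm window, the 8-hour contract, and all of the names are hypothetical, purely for illustration:

```swift
// A minimal sketch of the flexible-hours idea above. The window,
// the 8-hour contract, and every name here are hypothetical.
struct FlexibleDay {
    let windowStart = 7    // earliest start, 7 am
    let windowEnd = 21     // latest finish, 9 pm
    let contractedHours = 8

    /// Returns the finish hour for a chosen start hour and break length,
    /// or nil if the day wouldn't fit inside the window.
    func finishHour(startingAt start: Int, breakHours: Int) -> Int? {
        guard start >= windowStart else { return nil }
        let finish = start + contractedHours + breakHours
        return finish <= windowEnd ? finish : nil
    }
}

let day = FlexibleDay()

// Early start with a short lunch: finish at 16:00 and go to those plans.
if let finish = day.finishHour(startingAt: 7, breakHours: 1) {
    print("Start 7, finish \(finish)")   // Start 7, finish 16
}

// Brunch day: start at 11 with a two-hour break, finishing at 21:00.
if let finish = day.finishHour(startingAt: 11, breakHours: 2) {
    print("Start 11, finish \(finish)")  // Start 11, finish 21
}
```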
To go even further on this idea, what if the entire week was flexible? What if you could fit in 40 hours of work as you see fit throughout the week?
I think this idea of control, and having your working hours work around you, instead of everyone conforming to the same schedule, would result in a massive feeling of freedom.
While I have personally been working throughout the entire pandemic, I have now done so from home for two full years. I start going back to the office in the week starting 28th March, for two days a week. And I imagine a lot more people will be starting to go back to the office as well, if they haven’t done so already. It will certainly be fascinating to see what cultural changes have happened over the course of the pandemic, specifically regarding commuting into a city to work after two years of working from home. Especially since it appears that people are at least attempting to “go back to normal”.
I guess the question is, “what is normal now?”. A while ago everyone was predicting significant changes to how people work, how they socialise, and their entire priorities and attitudes towards their lifestyles. But surely at some point we’ve got to see this take place? Or has it already happened, and we’ve already accepted it as normal?
Back in early February, I decided to treat myself to a base model 14” MacBook Pro, upgrading from a 16” model from 2019. I had it in my head that I was going to write a big post on my experience of using an M1 Mac compared to an Intel one, from both a user and a developer perspective, and go over any adjustments that I needed to make. Turns out, that won’t be happening, as it’s been a completely seamless experience. This machine is great, and to be honest, it’s surprised me a lot.
Why I Got the Base Model
My last machine had a 2.6 GHz 6-core Intel Core i7 processor, 32 GB of memory, 1 TB of storage, and a 16” screen. So, this time, I imagined that I’d probably need around the same spec. Except I didn’t really know what to expect from Apple’s M1 chip. I had heard it was better, but when I asked on Twitter and watched reviews on YouTube, it still seemed like developers “needed” the higher-spec M1 Max chip, and definitely 32 GB of memory or more.
Based on the feedback I got, and the reviews I watched/read, it seemed like the model for me was the 16” MacBook Pro with the M1 Max chip and 32 GB memory, so it was looking pretty expensive.
But then I remembered how well received the original M1 MacBook Pro/Air machines were, even the models with 8 GB memory. So, I decided to look at it from another perspective. I wanted one of the newer models (14” or 16”), so I started with the base model 14”, and decided that I’d only choose an upgrade if I was certain that I’d need it.
The screen size was easy, I’ve had many 13” models, and the size was always perfect. I went with the 16” last time because I wanted to experiment with a larger screen, but I can’t say it ever added much value for me. So, 14” it was.
Which M1 chip to get was the hard choice. The base chip seemed to be so much more powerful than my current Mac, so it should have been straightforward. But I still had the recommendations in my head that developers needed the M1 Max chip, or at least a very powerful M1 Pro. Then I came across the XcodeBenchmark project on GitHub. It’s essentially a very large Xcode project, from which build times can be measured and various MacBook specs compared. My Intel MacBook Pro built the project in 242 seconds, so that was my baseline. When I noticed that the M1 8-core Air took only 128 seconds, I knew that whatever model I got would be a substantial upgrade in power. The base M1 Pro chip in the 14” model was even faster at 109 seconds, well over twice as fast as my baseline. That was enough reassurance for me, so I decided that I could easily get away with the base M1 Pro chip.
The memory question became a lot simpler when I realised that the Mac I use at my day job only has 16 GB, and I had never encountered an issue. And storage was never really a concern, since I don’t use much on my laptops: I have a load of stuff in iCloud, and a load more on my NAS.
All of that meant that I didn’t actually need any upgrades. Turns out, the base model was all I needed.
Like I mentioned at the beginning, my experience with this machine has only been positive. It’s more than capable of handling everything I’ve been throwing at it, whether that’s compiling and running iOS/macOS apps or playing games like World of Warcraft. I can’t say that I’ve ever pushed this machine anywhere near its limits. Which both makes me pleased I chose this model, and confused about why I was seeing so many recommendations for various upgrades.
The keyboard was a big surprise for me. I’m not sure if I didn’t know, or I’d just forgotten, but I didn’t realise that the keyboard had been upgraded in the new models. The key travel is much better, the keys are so much more responsive, and they feel really sturdy. I was also expecting to be slightly disappointed by the lack of Touch Bar (I was one of the few fans), but that didn’t happen at all. The downsides of not having a few contextual actions available near the keyboard really don’t outweigh the responsiveness and ease of physical keys.
The speakers are great. I don’t know how you’d go about explaining how good they are, but now that I’ve experienced these, I can’t listen to any other laptop speakers again.
I’ve had a few Zoom calls on this machine, and I have seen an upgrade in the camera quality. Nothing to shout about, but definitely an improvement.
From what I’d read and watched on the new M1 Macs, I expected that I’d be dealing with Rosetta a lot. Especially when developing apps. But, I haven’t actually had to deal with it at all.
I’m guessing that some apps I’ve used may be running using Rosetta, but I haven’t noticed anything weird. So, I guess it’s all working as expected.
I’m not sure how believable this is for people, given what I’ve seen online, but I honestly never notice the notch when I’m using this machine. I only remember that it has one when I see someone mention it on Twitter, and then I look up and see a black cutout over the menu bar.
My expectations were that I’d find the notch to be hideous and intrusive, but I was very pleasantly surprised.
Overall, I can only reiterate how great this machine is. It’s far more than I need, and I think the same will probably apply to most people. The base model is so powerful now that, unless you know of a specific use case of yours that absolutely requires an upgrade, it is most likely more than sufficient.
I’ve been thinking about my writing recently, but from a different perspective to normal. This time thinking about the longer-term life of some of the things I’ve written. Not the quick link posts, or the product reviews, or anything like that, but more of the longer-form pieces that I’ve really put thought into.
I’m not sure how to best explain them, but if I could choose a few examples that fit into this category, which I’ll be calling “essays”, they would be these:
These all range roughly from 500 to 1,300 words, so there’s no set length. But I would say that what I call an essay is a piece of writing that you could print out and have stand on its own, without needing the context of my blog to support it.
I’ve gone through my blog and found 25 posts that I feel fit this category, and organised them under the “Essay” tag, which can be browsed on its own, and has been added to the main navigation bar at the top.
I’m thinking mainly of archival purposes, but the thought of having my writing in book form, especially a physical copy, sounds very appealing.
Maybe those books could be available for others, although I would guess that they would only ever be available digitally. But it’s certainly something I want to look into soon.
The concept that I think would suit my writing best would be a collection of volumes, where perhaps Volume I has X pieces of writing, and then Volume II has the next X pieces of writing, and so on.
I think that’s my first project for 2022. Curating my best writing so far, and making a book. First for myself, but also potentially for others.
Note: These are raw thoughts and not a PhD thesis, and therefore should be treated as such.
In my opinion, social media networks like Twitter, Instagram, and to some extent other microblogging platforms, are underutilised and I think we could gain so much more from using them.
In short, I think that social networks are more enjoyable for everyone when people share everyday life, opinions, ideas, life updates, progress, and real experiences.
I’ve noticed a few things that I think are misconceptions on how we should treat social media:
Every photo needs to be perfect. The background can’t be distracting, you must be in an amazing location, with no mess, and you must also be a professional photographer.
Your thoughts need to fit within the expectations of others.
If you do not provide context, then it is wise to assume the worst possible scenario.
You must treat yourself as a brand.
Sharing a curated feed of your best moments makes you interesting.
While I don’t believe I’m the messiah brought to Earth to fix every problem with social networks, there are a few things that I think we forget when it comes to using them:
We are all real people.
Our lives in most cases are drastically different to what we share online.
Real life is what other people can relate to.
It’s always seemed fascinating to me how we all seem to understand that social media doesn’t represent real life, but we still get caught up in it. It’s like we’re all wilful subscribers to an alternate reality, where we get triggered by purposefully emotive headlines, by opinions that differ from our own, and by people that we do not know.
But imagine if we used social networks to share our real-life experiences. We all have them. We can all see the distinction between what happens in real life and what appears on social media.
I think that is where Micro.blog has felt different to platforms like Twitter for me. In a sense, it feels slower, but at the same time, it feels like you are connecting with real people. Whereas when I use Twitter, most of the time it feels like I’m interacting with an online account rather than the person behind it.
I’ve definitely fallen into the trap before, where I’ve used Twitter as a place to share perfect photos, links to my blog posts, and anything else that can bring external validation. But I think I’m going to try and just use it like a normal person for a while, and see how it goes. Nothing I do is perfect, and it won’t ever become perfect. So the only thing I’d ask is that if you do see me on Twitter, please treat my public posts as coming from a real person, not someone simply out to cause havoc.
I’ve subscribed to various streaming services in the past, such as Apple Music, Spotify, and Rdio. With some basic maths, I can work out that if I’ve been streaming music for around 10 years (at least), at a rough average of £8 a month (allowing for some small discounts along the way), that comes to a total of £960 spent on temporary access to music.
I don’t mean to create any hysteria with that figure, as it’s spread over a ten-year period, and I’ve no doubt enjoyed the music. But I wonder how much it would have cost to purchase every song I listened to in that period. I currently have around 3,000 songs in my Apple Music library, and I’ve surely listened to countless others as well. So it sounds like I’ve got my money’s worth. But I’d still be left with nothing if the service went away.
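As a rough sanity check of that maths (the £8 monthly average and the 3,000-song library count are my own approximations, and the £0.99 comparison is just a typical single-song purchase price):

```swift
// Back-of-the-envelope streaming maths from above. The £8/month
// average and the 3,000-song library count are rough approximations.
let years = 10.0
let monthlyFee = 8.0                       // £, rough average after discounts
let totalSpent = years * 12 * monthlyFee   // £960
let librarySongs = 3_000.0
let costPerLibrarySong = totalSpent / librarySongs  // £0.32 per song

print("Total spent: £\(totalSpent)")                    // £960.0
print("Cost per library song: £\(costPerLibrarySong)")  // £0.32
// Compare with roughly £0.99 to buy a single song outright.
```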
It’s certainly an interesting thing to ponder. Because on one hand, music streaming platforms give you access to their vast collections of songs and you can listen to them on practically every computer possible. But on the other, at no point do you own this media, you are merely paying for the privilege to have temporary access to someone else’s music.
When I think about ownership of media, I start to think about the music I’ve streamed, but also the books, audiobooks, tv shows, and movies that I’ve purchased digitally over the years.
And while I can theoretically access this media forever, these purchases exist solely in Apple’s ecosystem; there’s still something I need to maintain in order to access them. Without owning and using a device that can access the movies I’ve purchased from iTunes, those purchases are worthless. This means that they do not result in ownership in the way purchasing a CD does; instead, what you own is access to the content on whatever platforms the distributor deems suitable.
One example is buying a movie. If you purchase a physical copy of a movie on DVD, then you are free to watch that DVD on any DVD player, or you can even transfer the movie to your computer into a digital file and have even more freedom. But if for example, you purchase a movie in the iTunes Store, then you have no control over the copy that you purchased. Sure, you can watch it on platforms that have access to your iTunes purchases. But what if for some reason, you lose access to your iTunes account? You can’t export the movie files, you can’t burn them to a disc, and there would be no way for you to access your purchases on any new device either.
Then again, is any of this actually a problem? The reason I purchase movies is to watch them multiple times. I really don’t care about the ownership aspect, I just want the privilege of on-demand access to the content that I like.
It also applies to music. It doesn’t matter whether or not I have control of the raw files, what I care about is being able to listen to my favourite songs whenever I want.
So maybe I don’t need to rush off and start my own personal media collection, as the balance between access to vast collections of content and the relative cost is currently working in my favour.
In the end, it comes down to personal preference. As always.
However, after this little thought experiment, instead of realising that streaming services are bad and that I need to “own” everything I consume — which is what I thought would happen — it’s led me to believe that the bigger problem lies right in the middle of streaming content and owning content. In the places where you are required to pay the premium of long-term ownership, but do not have total control over your personal copies.
Because yes, while using streaming services, you do only have temporary access to content. But at least that is reflected in the price that you pay. Just as you would pay more for a physical copy of a movie or album because you are paying for the control and ownership.
Therefore, while I’m not planning on quitting streaming services, I may stop purchasing media from stores such as iTunes, and instead opt for a physical copy (usually at the same or a lower price), which I then control and can store digitally if I so wish.
I can't quite figure out what caused this transition, but recently I've been writing my blog posts on the web directly in the Ghost editor, and I'm rather enjoying it. A while ago, I would have only thought about using a native app, whether I was writing on my Mac, iPhone, or iPad.
But writing in the editor feels to me more like I'm actively writing on my blog. Not just writing something that may be shared later on to my blog. Maybe that makes sense, I'm not so sure. But there definitely feels like a distinction in my head.
I've seen some comments in the past about writing in online editors being bad, with them being slow, not having a good UI compared to native apps, and even having the possibility to lose your progress. But I don't think the web is that bad anymore. Or at least the Ghost editor isn't. If you want to check it out, Matt Birchler made a great video about the Ghost admin interface.
I wonder what the current consensus is on writing directly in a web interface. Is this behaviour still weird? Or am I simply joining everyone else on this one?
Since writing that post and experimenting with various videos, I've started to think of them as background environments: the idea is to immerse yourself in these scenes in order to remove distractions from the physical environment you're actually in. But I've become fascinated by how the experience could be improved.
My current thinking is that the videos should match the real-world environment and, to an extent, local time. I don't think a warm room with a crackling fireplace would be as effective on a sunny afternoon or an icy morning as it would be later in the day, once the sun has set. When the video does match, it changes from being separate from your physical environment to being an extension of your real-world surroundings, with some added visuals and background noise.
I can't see it being feasible for a product to automate this, but it would be pretty cool to have a constant stream of ASMR room videos that adapt to the time of day, the seasons, and possibly the local weather. For example, a winter's day could feature a snowy courtyard in the morning, followed by a library in the early afternoon; then you could watch the sun set over a vista, and relax by a fire in the evening.
One idea that may work is a livestream that rotates through videos, perhaps localised to a time zone or country so it aligns with sunrise/sunset times and seasons. I don't know how interested other people would be in that, but I'd certainly watch it.
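To illustrate the idea (with entirely made-up scene names and hour boundaries), picking a scene from the viewer's local time and season could be as simple as:

```swift
import Foundation

// A hypothetical sketch of the adaptive "background environment" idea.
// Scene names and hour boundaries are invented for illustration only.
enum Season { case spring, summer, autumn, winter }

func scene(for hour: Int, in season: Season) -> String {
    switch (season, hour) {
    case (.winter, 6..<12):  return "Snowy courtyard"      // morning
    case (.winter, 12..<16): return "Quiet library"        // early afternoon
    case (.winter, 16..<18): return "Sunset over a vista"  // dusk
    case (.winter, _):       return "Crackling fireplace"  // evening/night
    case (_, 6..<20):        return "Open window with birdsong"
    default:                 return "City lights at night"
    }
}

// A real stream would localise this to the viewer's time zone, and could
// also factor in actual sunrise/sunset times and the local weather.
let hour = Calendar.current.component(.hour, from: Date())
print(scene(for: hour, in: .winter))
```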
This may all seem a bit weird, or just me taking something simple, way too far. But this is the kind of stuff that goes through my head.
So Apple has finally announced the first Macs that will run on Apple Silicon. To be specific, there is a new MacBook Air, a new MacBook Pro 13", and a new Mac mini. And they all have the new M1 chip.
This is still early on, and there's bound to be more information as time goes on, and as people eventually receive their machines. But, it leaves me with some questions regarding the M1, Apple's idea behind the Mac lineup, and Apple Silicon in general.
Is an M1 always an M1?
With all three new Macs having the M1 chip, I assumed that the only difference in power would come from how much power each machine can draw, and its thermal capacity. As in, the Mac mini is plugged in constantly, so it can draw more power; and the MacBook Air doesn't have a fan, so it needs to maintain a lower temperature.
But while it appears that the M1 is the same across the models, there is one machine which has a slight variant. The cheapest MacBook Air for some reason has an M1 with a 7-core GPU. And all of the other machines have an 8-core GPU.
So are all M1 chips the same? Does the "7-core GPU" variant actually have 8 cores, with one switched off? Or did they literally make two versions of the same chip, with one GPU core being the difference? And if they are physically different, does M1 represent a chip family?
Is CPU configuration now dead?
With the new M1s being the same, apart from the weird MacBook Air situation, there is now one less thing you can configure when purchasing a Mac.
Sure, you have the option of a 7-core or 8-core GPU on your MacBook Air, but this is not configurable in the same way that memory and storage are.
Maybe from now on, the chip will determine the model. And if Apple does start to separate Mac models by chip variants, will we ever be told more about them apart from the number of cores and the iteration?
What chip will be in the next tier of Macs?
Even if we class the Mac mini, MacBook Air, and MacBook Pro 13" models as transitioned to Apple Silicon, there are still four more models that run exclusively on Intel chips: the MacBook Pro 16", the iMac, the iMac Pro, and the Mac Pro.
They will obviously feature higher-performance chips than the current M1. But I wonder how far they will go, and at what rate. Because although the MacBook Pro 16" is a laptop, it's the high-end model, and will therefore need to be much more powerful than the 13".
But the other three models all have one benefit over the laptops: a constant power source. And the Mac Pro can go even further due to its larger size.
Apple said they wanted to transition the whole Mac platform to Apple Silicon in around two years. But I wonder whether that means only having Apple Silicon Macs available, or just having an Apple Silicon option for every Mac while still selling various Intel variants.
How many chip variants will Apple sell at once?
This isn't exactly a major question, but it will be interesting to see how many Apple Silicon chips will be available to buy at a single time.
When the whole platform has transitioned, I wonder if at some point they will all run the same M-class chip, with variants on certain models. And at what rate will they be upgraded?
The iPhone chips are updated every year, so it will be interesting to see whether M chips follow the same cadence. Although, would that mean every Mac gets updated every year? Or just certain models?
Is the memory limit a problem?
The Macs that have the M1 chip are all limited to a maximum of 16 GB of memory. That doesn't seem great to me, since the Intel MacBook Pro 13" supports up to 32 GB, double that of its replacement.
Maybe this is a technical limitation? I initially thought it was a limitation of the M1 chip itself, but I've also seen suggestions that it's due to the type of memory used, or even the heat generated by larger amounts of memory. So it could also simply be a product decision.
And although the limit is pretty small, will it actually be a problem? iPhones have much less RAM than Android phones, and they're by no means slow. So maybe the tight integration of Apple Silicon and macOS will create the same benefit, and memory will go further on Apple Silicon than on an Intel equivalent.
These are the questions I have right now, and I bet there's a load more that others want to be answered too. We'll simply have to wait and see what happens.