GUI v0.16.0.2 'Nitrogen Nebula' released!

This is the GUI v0.16.0.2 'Nitrogen Nebula' point release. This release predominantly features bug fixes and performance improvements.

(Direct) download links

GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
8e3ce10997ab50eec2ec3959846d61b1eb3cb61b583c9f0f9f5cc06f63aaed14 monero-android-armv7-v0.16.0.1.tar.bz2
d9e885b3b896219580195fa4c9a462eeaf7e9f7a6c8fdfae209815682ab9ed8a monero-android-armv8-v0.16.0.1.tar.bz2
4f4a2c761b3255027697cd57455f5e8393d036f225f64f0e2eff73b82b393b50 monero-freebsd-x64-v0.16.0.1.tar.bz2
962f30701ef63a133a62ada24066a49a2211cd171111828e11f7028217a492ad monero-linux-armv7-v0.16.0.1.tar.bz2
83c21fe8bb5943c4a4c77af90980a9c3956eea96426b4dea89fe85792cc1f032 monero-linux-armv8-v0.16.0.1.tar.bz2
4615b9326b9f57565193f5bfe092c05f7609afdc37c76def81ee7d324cb07f35 monero-linux-x64-v0.16.0.1.tar.bz2
3e4524694a56404887f8d7fedc49d5e148cbf15498d3ee18e5df6338a86a4f68 monero-linux-x86-v0.16.0.1.tar.bz2
d226c704042ff4892a7a96bb508b80590a40173683101db6ad3a3a9e20604334 monero-mac-x64-v0.16.0.1.tar.bz2
851b57ec0783d191f0942232e431aedfbc2071125b1bd26af9356c7b357ab431 monero-win-x64-v0.16.0.1.zip
e944d15b98fcf01e54badb9e2d22bae4cd8a28eda72c3504a8156ee30aac6b0f monero-win-x86-v0.16.0.1.zip
#
## GUI
d35c05856e669f1172207cbe742d90e6df56e477249b54b2691bfd5c5a1ca047 monero-gui-install-win-x64-v0.16.0.2.exe
9ff8c91268f8eb027bd26dcf53fda5e16cb482815a6d5b87921d96631a79f33f monero-gui-linux-x64-v0.16.0.2.tar.bz2
142a1e8e67d80ce2386057e69475aa97c58ced30f0ece3f4b9f5ea5b62e48419 monero-gui-mac-x64-v0.16.0.2.tar.bz2
6e0efb25d1f5c45a4527c66ad6f6c623c08590d7c21de5a611d8c2ff0e3fbb55 monero-gui-win-x64-v0.16.0.2.zip
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl8JaBMACgkQ8K9NRioL
35IKbhAAnmfm/daG2K+llRBYmNkQczmVbivbu9JLDNnbYvGuTVH94PSFC/6K7nnE
8EkiLeIVtBBlyr4rK288xSJQt+BMVM93LtzHfA9bZUbZkjj2le+KN8BHcmgEImA8
Qm2OPgr7yrxvb3aD5nQUDoaeQSmnkLCpN2PLbNGymOH0+IVl1ZYjY7pUSsJZQGvC
ErLxZSN5TWvX42LcpyBD3V7//GBOQ/gGpfB9fB0Q5LgXOCLlN2OuQJcYY5KV3H+X
BPp9IKKJ0OUGGm0j7mi8OvHxTO4cbHjU8NdbtXy8OnPkXh24MEwACaG1HhiNc2xl
LhzMSoMOnVbRkLLtIyfDC3+PqO/wSxVamphKplEncBXN28AakyFFYOWPlTudacyi
SvudHJkRKdF0LVIjXOzxBoRBGUoJyyMssr1Xh67JA+E0fzY3Xm9zPPp7+Hp0Pe4H
ZwT7WJAKoA6GqNpw7P6qg8vAImQQqoyMg51P9Gd+OGEo4DiA+Sn5r2YQcKY5PWix
NlBTKq5JlVfRjE1v/8lUzbe+Hq10mbuxIqZaJ4HnWecifYDd0zmfQP1jt7xsTCK3
nxHb9Tl1jVdIuu2eCqGTG+8O9ofjVDz3+diz6SnpaSUjuws218QCZGPyYxe91Tz8
dCrf41FMHYhO+Lh/KHFt4yf4LKc0c048BoVUg6O0OhNIDTsvd/k=
=akVA
-----END PGP SIGNATURE-----
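For reference, on Linux the check boils down to something like the following (the key and message file names below are just examples; the guides linked above cover the full procedure):

# Import binaryFate's key (from /utils/gpg_keys in the source repository) and verify the signed message
gpg --import binaryfate.asc
gpg --verify hashes.txt
# Compare the published hash with the hash of the file you downloaded
sha256sum monero-gui-linux-x64-v0.16.0.2.tar.bz2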

Upgrading

Note that, once the DNS records are updated, you should be able to use the automatic updater that was recently added to the GUI. A pop-up will appear with the new binary.
In case you want to update manually, you ought to perform the following steps:
  1. Download the new binaries (the .zip file for Windows, or the .tar.bz2 file for Mac OS X and Linux) from the direct download links in this thread or from the official website. If you run active AV (antivirus) software, I'd recommend applying this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
  2. Extract the new binaries (the .zip or .tar.bz2 file you just downloaded) to a new directory / folder of your liking.
  3. Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users/<username>/Monero/ (Mac OS X), or home/<username>/Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.
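To make the manual route concrete, on Linux the steps above could look roughly like this (use the direct download links in this thread for the real URL; the target paths and the extracted directory name are just examples):

# Download and extract the new GUI into a fresh directory
wget <direct-download-link>/monero-gui-linux-x64-v0.16.0.2.tar.bz2
mkdir -p ~/monero-gui-new && tar -xjf monero-gui-linux-x64-v0.16.0.2.tar.bz2 -C ~/monero-gui-new
# Launch the new wallet; it should pick up your existing wallet and blockchain data
~/monero-gui-new/monero-gui-v0.16.0.2/monero-wallet-gui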

Release notes

Point release:
  • Fix bug that inhibited Ledger Monero users from properly sending transactions containing multiple inputs.
  • CMake improvements
  • Minor security relevant fixes
  • Various bug fixes
Major release:
  • Simple mode: node selection algorithm improved
  • UX: display estimated transaction fee
  • UX: add update dialog with download and verify functionality
  • UX: implement autosave feature
  • UI: redesign advanced options on transfer page
  • UI: improve daemon sync progress bar
  • UI: new language sidebar
  • UI: new processing splash design
  • UI: redesign settings page
  • Trezor: support new passphrase entry mechanism
  • Wizard: add support for seed offset
  • Dandelion++
  • Major Bulletproofs verification performance optimizations
  • Various bug fixes and performance improvements
Note that you can find a full change log here.

Further remarks

  • A guide on pruning can be found here.
  • Ledger Monero users, please be aware that version 1.6.0 of the Ledger Monero App is required in order to properly use GUI v0.16.

Guides on how to get started

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

Guides to resolve common issues

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
I forgot to upgrade (from CLI or GUI v0.13 to CLI or GUI v0.14) and, as a result, accidentally synced to the wrong (alternative) chain
I forgot to upgrade (from CLI or GUI v0.13 to CLI or GUI v0.14) and created / performed a transaction on the wrong (alternative) chain
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

Using the GUI with a remote node

In the wizard, you can select either Simple mode or Simple mode (bootstrap) to use this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to set a remote node manually, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html
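For those comfortable with the command line, the CLI equivalent of using a remote node is to pass the daemon address directly (the wallet path and node address below are just examples):

monero-wallet-cli --wallet-file ~/Monero/wallets/mywallet --daemon-address node.example.com:18089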

Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.
submitted by dEBRUYNE_1 to Monero

The Next Processor Change is Within ARMs Reach

As you may have seen, I sent the following Tweet: “The Apple ARM MacBook future is coming, maybe sooner than people expect” https://twitter.com/choco_bit/status/1266200305009676289?s=20
Today, I would like to further elaborate on that.
tl;dr Apple will be moving to ARM-based Macs in what I believe are 4 stages, starting around 2015 and ending around 2023-2025: release of T1 chip MacBooks, release of T2 chip MacBooks, release of at least one lower-end ARM MacBook, and transitioning the full lineup to ARM. Reasons for each are below.
Apple is very likely going to switch their CPU platform to their in-house silicon designs with an ARM architecture. This understanding is fairly common amongst various Apple insiders. Here is my personal take on how this switch will happen and be presented to the consumer.
The first question would likely be “Why would Apple do this again?”. Throughout their history, Apple has already made two other storied CPU architecture switches - first from the Motorola 68k to PowerPC in the early 90s, then from PowerPC to Intel in the mid 2000s. Why make yet another? Here are the leading reasons:
A common refrain heard on the Internet is the suggestion that Apple should switch to using CPUs made by AMD, and while this has been considered internally, it will most likely not be chosen as the path forward, even for their megalithic giants like the Mac Pro. Even though AMD would mitigate Intel’s current set of problems, it does nothing to help with the problems and inefficiencies of the x86_64 architecture itself, on top of jumping to a platform that doesn’t have a decade of proven support behind it. Why spend a lot of effort re-designing and re-optimizing for AMD’s platform when you can just put that effort into your own, and continue the vertical integration Apple is well-known for?
I believe that the internal development for the ARM transition started around 2015/2016 and is considered to be happening in 4 distinct stages. Not all of this comes from Apple insiders; some of it is my own interpretation based off of information gathered from supply-chain sources, examination of MacBook schematics, and other indicators from Apple.

Stage1 (from 2014/2015 to 2017):

The rollout of computers with Apple’s T1 chip as a coprocessor. This chip is very similar to Apple’s T8002 chip design, which was used for the Apple Watch Series 1 and Series 2. The T1 is primarily present on the first TouchID enabled Macs, 2016 and 2017 model year MacBook Pros.
Considering the amount of time required to design and validate a processor, this stage most likely started around 2014 or 2015, with early experimentation to see whether an entirely new chip design would be required, or if it would be sufficient to repurpose something in the existing lineup. As we can see, the general purpose ARM processors aren’t a one-trick pony.
To get a sense of the decision making at the time, let’s look back a bit. The year is 2016, and we're witnessing the beginning of the stagnation of Intel’s processor lineup. There is not a lot to look forward to other than another “+” being added to the 14nm fabrication process. The MacBook Pro has used the same design for many years now, and its age is starting to show. Moving to AMD is still very questionable: they’ve historically not been able to match Intel’s performance or functionality, especially at the high end; the “Ryzen” lineup is still unreleased, so there are absolutely no benchmarks or other data to show they are worth consideration; and AMD’s most recent line of “Bulldozer” processors was very poorly received. Now is probably as good a time as any to begin experimenting with the in-house ARM designs, but it’s not time to dive into the deep end yet; our chips are not nearly mature enough to compete, and it’s not yet certain how long Intel will be stuck in the mud. As well, it is widely understood that Apple and Intel have an exclusivity contract in exchange for advantageous pricing. Any transition would take considerable time and effort, and since there is no current viable alternative to Intel, the in-house chips will need to advance further, and breaching a contract with Intel is too great a risk. So it makes sense to start with small deployments, to extend the timeline, stretch out to the end of the contract, and eventually release a real banger of a Mac.
Thus, the 2016 Touch Bar MacBooks were born, alongside the T1 chip mentioned earlier. There are good reasons for abandoning the piece of hardware previously used for a similar purpose, the SMC or System Management Controller. I suspect that the biggest reason was to allow early analysis of the challenges that would be faced migrating Mac built-in peripherals and IO to an ARM-based controller, as well as exploring the manufacturing, power, and performance results of using the chips across a broad deployment, and analyzing any early failure data, then using this to patch any issues, enhance processes, and inform future designs looking towards the 2nd stage.
The former SMC duties that moved to the T1 include things like:
The T1 chip also communicates with a number of other controllers to manage a MacBook’s behavior. Even though it’s not a very powerful CPU by modern standards, it’s already responsible for a large chunk of the machine’s operation. Moving control of these peripherals to the T1 chip also brought about the creation of the fabled BridgeOS software, a shrunken-down watchOS-based system that operates fully independently of macOS and the primary Intel processor.
BridgeOS is the first step for Apple’s engineering teams to begin migrating underlying systems and services to integrate with the ARM processor via BridgeOS, and it allowed internal teams to more easily and safely develop and issue firmware updates. Since BridgeOS is based on a standard and now well-known system, it means that they can leverage existing engineering expertise to flesh out the T1’s development, rather than relying on the more arcane and specialized SMC system, which operates completely differently and requires highly specific knowledge to work with. It also allows reuse of the same fabrication pipeline used for Apple Watch processors, and eliminated the need to have yet another IC design for the SMC, coming from a separate source, to save a bit on cost.
Also during this time, on the software side, “Project Marzipan”, today Catalyst, came into existence. We'll get to this shortly.
For the most part, this Stage 1 went without any major issues. There were a few firmware problems at first during the product launch, but they were quickly solved with software updates. Now that engineering teams have had experience building for, manufacturing, and shipping the T1 systems, Stage 2 would begin.

Stage2 (2018-Present):

Stage 2 encompasses the rollout of Macs with the T2 coprocessor, replacing the T1. This includes a much wider lineup, including MacBook Pro with Touch Bar, starting with 2018 models, MacBook Air starting with 2018 models, the iMac Pro, the 2019 Mac Pro, as well as Mac Mini starting in 2018.
With this iteration, the more powerful T8012 processor design was used, which is a further revision of the T8010 design that powers the A10 series processors used in the iPhone 7. This change provided a significant increase in computational ability and brought about the integration of even more devices into T2. In addition to the T1’s existing responsibilities, T2 now controls:
Those last 2 points are crucial for Stage 2. Under this new paradigm, the vast majority of the Mac is now under the control of an in-house ARM processor. Stage 2 also brings iPhone-grade hardware security to the Mac. These T2 models also incorporated a supported DFU (Device Firmware Update, more commonly “recovery mode”), which acts similarly to the iPhone DFU mode and allows restoration of the BridgeOS firmware in the event of corruption (most commonly due to user-triggered power interruption during flashing).
Putting more responsibility onto the T2 again allows for Apple’s engineering teams to do more early failure analysis on hardware and software, monitor stability of these machines, experiment further with large-scale production and deployment of this ARM platform, as well as continue to enhance the silicon for Stage 3.
A few new user-visible features were added as well in this stage, such as support for the passive “Hey Siri” trigger, and offloading image and video transcoding to the T2 chip, which frees up the main Intel processor for other applications. BridgeOS was bumped to 2.0 to support all of these changes and the new chip.
On the macOS software side, what was internally known as Project Marzipan was first demonstrated to the public. Though it was originally discovered around 2017, and most likely began development and testing within later parts of Stage 1, its effects could be seen in 2018 with the release of iPhone apps, now running on the Mac using the iOS SDKs: Voice Recorder, Apple News, Home, Stocks, and more, with an official announcement and public release at WWDC in 2019. Catalyst would come to be the name of Marzipan used publicly. This SDK release allows app developers to easily port iOS apps to run on macOS, with minimal or no code changes, and without needing to develop separate versions for each. The end goal is to allow developers to submit a single version of an app, and allow it to work seamlessly on all Apple platforms, from Watch to Mac. At present, iOS and iPadOS apps are compiled for the full gamut of ARM instruction sets used on those devices, while macOS apps are compiled for x86_64. The logical next step is to cross this bridge, and unify the instruction sets.
With this T2 release, the new products using it have not been quite as well received as those with the T1. Many users have noticed how this change contributes further towards machines with limited to no repair options outside of Apple’s repair organization, as well as some general issues with bugs in the T2.
Products with the T2 also no longer have the “Lifeboat” connector, which was previously present on 2016 and 2017 model Touch Bar MacBook Pro. This connector allowed a certified technician to plug in a device called a CDM Tool (Customer Data Migration Tool) to recover data off of a machine that was not functional. The removal of this connector limits the options for data recovery in the event of a problem, and Apple has never offered any data recovery service, meaning that an irreparable failure of the T2 chip or the primary board would result in complete data loss, in part due to the strong encryption provided by the T2 chip (even if the data could be read off, the encryption keys would be lost with the T2 chip). The T2 also brought about serial-number pairing of certain internal components, such as the solid state storage, display, and trackpad, among others. In fact, many other controllers on the logic board are now also paired to the T2, such as the WiFi and Bluetooth controller, the PMIC (Power Management Controller), and several other components. This is the exact same system used on newer iPhone models and is quite familiar to technicians who repair iPhone logic boards. While these changes are fantastic for device security and corporate and enterprise users, allowing for a very high degree of assurance that devices will refuse to boot if tampered with in any way - even from storied supply chain attacks, or other malfeasance that can be done with physical access to a machine - it has created difficulty for consumers, who more often lack the expertise or awareness to keep critical data backed up, as well as the funds to perform the necessary repairs through authorized repair providers. Other reported issues suspected to be related to the T2 are audio “cracking” or distortion on the internal speakers, and BridgeOS becoming corrupt following a firmware update, resulting in a machine that can’t boot.
I believe these hiccups will be properly addressed once macOS is fully integrated with the ARM platform. This stage of the Mac is more like a chimera of an iPhone and an Intel based computer. Technically, it does have all of the parts of an iPhone present within it, cellular radio aside, and I suspect this fusion is why these issues exist.
Recently, security researchers discovered an underlying security problem present within the Boot ROM code of the T1 and T2 chip. Due to being the same fundamental platform as earlier Apple Watch and iPhone processors, they are vulnerable to the “checkm8” exploit (CVE-2019-8900). Because of how these chips operate in a Mac, firmware modifications caused by use of the exploit will persist through OS reinstallation and machine restarts. Both the T1 and T2 chips are always on and running, though potentially in a heavily reduced power usage state, meaning the only way to clean an exploited machine is to reflash the chip, triggering a restart, or to fully exhaust or physically disconnect the battery to flush its memory. Fortunately, this exploit cannot be done remotely and requires physical access to the Mac for an extended duration, as well as a second Mac to perform the change, so the majority of users are relatively safe. As well, with a very limited execution environment and access to the primary system only through a “mailbox” protocol, the utility of exploiting these chips is extremely limited. At present, there is no known malware that has used this exploit. The proper fix will come with the next hardware revision, and is considered a low priority due to the lack of practical usage of running malicious code on the coprocessor.
At the time of writing, all current Apple computers have a T2 chip present, with the exception of the 2019 iMac lineup. This will change very soon with the expected release of the 2020 iMac lineup at WWDC, which will incorporate a T2 coprocessor as well.
Note: from here on, this turns entirely into speculation based on info gathered from a variety of disparate sources.
Right now, we are in the final steps of Stage 2. There are strong signs that a MacBook (12”) with an ARM main processor will be announced this year at WWDC (“One more thing...”), at a Fall 2020 event, a Q1 2021 event, or WWDC 2021. Based on the lack of a more concrete answer, WWDC 2020 will likely not see it, but I am open to being wrong here.

Stage3 (Present/2021 - 2022/2023):

Stage 3 involves the first version of at least one fully ARM-powered Mac into Apple’s computer lineup.
I expect this will come in the form of the previously-retired 12” MacBook. There are rumors that Apple is still working internally to perfect the infamous Butterfly keyboard, and there are also signs that Apple is developing an A14X-based processor with 8-12 cores designed specifically for use as the primary processor in a Mac. It makes sense that this model could see the return of the Butterfly keyboard, considering how thin and light it is intended to be, and using an A14X processor would make it a very capable, very portable machine that should give customers a good taste of what is to come.
Personally, I am excited to test the new 12" “ARMbook”. I do miss my own original 12", even with all the CPU failure issues those older models had. It was a lovely form factor for me.
It's still not entirely known whether the physical design of these will change from the retired version, exactly how many cores it will have, the port configuration, etc. I have also heard rumors about the 12” model possibly supporting 5G cellular connectivity natively thanks to the A14 series processor. All of this will most likely be confirmed soon enough.
This 12” model will be the perfect stepping stone for stage 3, since Apple’s ARM processors are not yet a full-on replacement for Intel’s full processor lineup, especially at the high end, in products such as the upcoming 2020 iMac, iMac Pro, 16” MacBook Pro, and the 2019 Mac Pro.
Performance of Apple’s ARM platform compared to Intel has been a big point of contention over the last couple of years, primarily due to the lack of data representative of real-world desktop usage scenarios. The iPad Pro and other models with Apple’s highest-end silicon still lack the ability to execute a lot of high-end professional applications, so performance data beyond video-editing and photo-editing benchmarks quickly becomes meaningless. While there are completely synthetic benchmarks like Geekbench, Antutu, and others that try to bridge the gap, they are very far from being accurate or representative of real-world performance in many instances. Even though the Apple ARM processors are incredibly powerful, and I do give constant praise to their silicon design teams, there still just isn’t enough data to show how they will perform for real-world desktop usage scenarios, and synthetic benchmarks are like standardized testing: they only show how good a platform is at running the synthetic benchmark. This type of benchmark stresses only very specific parts of each chip at a time, rather than how well it does a general task, and then boils down the complexity and nuances of each chip into a single numeric score, which is not a remotely accurate way of representing processors with vastly different capabilities and designs. It would be like gauging how well a person performs a manual labor task by averaging only the speed of every individual muscle in the body, regardless of if, or how much, each is used. A specific group of muscles being stronger or weaker than others could wildly skew the final result, and grossly misrepresent the performance of the person as a whole. Real-world program performance will be the key in determining the success and future of this transition, and it will have to be great on this 12" model, not just in a limited set of tasks; it will have to be great at *everything*. It is intended to be the first Horseman of the Apocalypse for the Intel Mac, and it had better behave like one. Consumers have been expecting this, especially after 15 years of Intel processors, the continued advancement of Apple’s processors, and the decline of Intel’s market lead.
The point of this “demonstration” model is to ease both users and developers into the desktop ARM ecosystem slowly. Much like how the iPhone X paved the way for FaceID-enabled iPhones, this 12" model will pave the way towards ARM Mac systems. Some power-user type consumers may complain at first, depending on the software compatibility story, then realize it works just fine since the majority of the computer users today do not do many tasks that can’t be accomplished on an iPad or lower end computer. Apple needs to gain the public’s trust for basic tasks first, before they will be able to break into the market of users performing more hardcore or “Pro” tasks. This early model will probably not be targeted at these high-end professionals, which will allow Apple to begin to gather early information about the stability and performance of this model, day to day usability, developmental issues that need to be addressed, hardware failure analysis, etc. All of this information is crucial to Stage 4, or possibly later parts of Stage 3.
The 2 biggest concerns most people have with the architecture change are app support and Bootcamp.
Any apps released through the Mac App Store will not be a problem. Because App Store apps are submitted as LLVM IR (“Bitcode”), the system can automatically download versions compiled and optimized for ARM platforms, similar to how App Thinning on iOS works. For apps distributed outside the App Store, things might be trickier. There are a few ways this could go:
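One plausible path, to make it concrete, is the fat/universal binary approach Apple used for the PowerPC and Intel transitions, where a single binary carries code for both architectures. A rough sketch with today's toolchain follows; the arm64 desktop target and file names are assumptions on my part, not anything Apple has announced:

# Compile the same sources for both architectures (arm64 Mac target is hypothetical here)
clang -arch x86_64 -o myapp_x86_64 main.c
clang -arch arm64 -o myapp_arm64 main.c
# Glue them together into a single universal binary and inspect the result
lipo -create myapp_x86_64 myapp_arm64 -output myapp
lipo -info myapp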
As for Bootcamp, while ARM-compatible versions of Windows do exist and are in development, they come with their own similar set of app support problems. Microsoft has experimented with emulating x86_64 on their ARM-based Surface products, and some other OEMs have created their own Windows-powered ARM laptops, but with very little success. Performance is a problem across the board, with other ARM silicon not being anywhere near as advanced, and with the majority of apps in the Windows ecosystem that were not developed in-house at Microsoft running terribly due to the x86_64 emulation software. If Bootcamp does come to the early ARM MacBook, it will more than likely run very poorly for anything other than Windows UWP apps. There is a high chance it will be abandoned entirely until Windows becomes much more friendly to the architecture.
I believe this will also be a very crucial turning point for the MacBook lineup as a whole. At present, the iPad Pro paired with the Magic Keyboard is, in many ways, nearly identical to a laptop, with the biggest difference being the system software itself. While Apple executives have outright denied plans of merging the iPad and MacBook line, that could very well just be a marketing stance, shutting down the rumors in anticipation of a well-executed surprise. I think that Apple might at least re-examine the possibility of merging Macs and iPads in some capacity, but whether they proceed or not could be driven by consumer reaction to both products. Do they prefer the feel and usability of macOS on ARM, and like the separation of both products? Is there success across the industry of the ARM platform, both at the lower and higher end of the market? Do users see that iPadOS and macOS are just 2 halves of the same coin? Should there be a middle ground, and a new type of product similar to the Surface Book, but running macOS? Should Macs and iPads run a completely uniform OS? Will iPadOS ever expose the same sort of UNIX-based tools for IT administrators and software developers that macOS has? These are all very real questions that will pop up in the near future.
The line between Stage 3 and Stage 4 will be blurry, and will depend on how Apple wishes to address different problems going forward, and what the reactions look like. It is very possible that only the 12” model will be released at first, or a handful of additional lower-end laptop and desktop products could be released, with high-performance Macs following in Stage 4, or perhaps everything but enterprise products like the Mac Pro will be switched fully. Only time will tell.

Stage 4 (the end goal):

Congratulations, you’ve made it to the end of my TED talk. We are now well into the 2020s and COVID-19 Part 4 is casually catching up to the 5G = Virus crowd. All Macs have transitioned fully to ARM. iMac, MacBooks Pro and otherwise, Mac Pro, Mac Mini, everything. The future is fully Apple from top to bottom, and vertical integration leading to market dominance continues. Many other OEMs have begun to follow this path to some extent, creating more demand for a similar class of silicon from other firms.
The remainder here is pure speculation with a dash of wishful thinking. There are still a lot of things that are entirely unclear. The only concrete thing is that Stage 4 will happen when everything is running Apple’s in- house processors.
By this point, consumers will be quite familiar with ARM Macs existing, and developers will have had enough time to transition apps fully over to the newly unified system. Any performance, battery life, or app support concerns will not be an issue at this point.
There are no more details here, it’s the end of the road, but we are left with a number of questions.
It is unclear whether Apple will stick with AMD's GPUs or instead opt to use the in-house graphics solutions that have been used since the A11 series of processors.
How Thunderbolt support on these models of Mac will be achieved is unknown. While Intel has made it openly available for use, and there are plans to have USB and Thunderbolt combined in a single standard, it’s still unclear how it will play along with Apple processors. Presently, iPhones do support connecting devices via PCI Express to the processor, but it has only been used for iPhone and iPad storage. The current Apple processors simply lack the number of lanes required for even the lowest end MacBook Pro. This is an issue that would need to be addressed in order to ship a full desktop-grade platform.
There is also the question of upgradability for desktop models, and if and how there will be a replaceable, socketed version of these processors. Will standard desktop and laptop memory modules play nicely with these ARM processors? Will they drop standard memory across the board, in favor of soldered options, or continue to support user-configurable memory on some models? Will my 2023 Mac Pro play nicely with a standard PCI Express device that I buy off the shelf? Will we see a return of “Mac Edition” PCI devices?
There are still a lot of unknowns, and guessing any further in advance is too difficult. The only thing that is certain, however, is that Apple processors coming to Mac is very much within arm’s reach.
submitted by Fudge_0001 to apple

another take on Getting into Devops as a Beginner

I really enjoyed m4nz's recent post: Getting into DevOps as a beginner is tricky - My 50 cents to help with it and wanted to do my own version of it, in hopes that it might help beginners as well. I agree with most of their advice and recommend folks check it out if you haven't yet, but I wanted to provide more of a simple list of things to learn and tools to use to complement their solid advice.

Background

While I went to college and got a degree, it wasn't in computer science. I simply developed an interest in Linux and Free & Open Source Software as a hobby. I set up a home server and home theater PC before smart TV's and Roku were really a thing simply because I thought it was cool and interesting and enjoyed the novelty of it.
Fast forward a few years and basically I was just tired of being poor lol. I had heard on the now defunct Linux Action Show podcast about linuxacademy.com and how people had had success with getting Linux jobs despite not having a degree by taking the courses there and acquiring certifications. I took a course, got the basic LPI Linux Essentials Certification, then got lucky by landing literally the first Linux job I applied for at a consulting firm as a junior sysadmin.
Without a CS degree, any real experience, and 1 measly certification, I figured I had to level up my skills as quickly as possible and this is where I really started to get into DevOps tools and methodologies. I now have 5 years experience in the IT world, most of it doing DevOps/SRE work.

Certifications

People have varying opinions on the relevance and worth of certifications. If you already have a CS degree or experience then they're probably not needed unless their structure and challenge would be a good motivation for you to learn more. Without experience or a CS degree, you'll probably need a few to break into the IT world unless you know someone or have something else to prove your skills, like a github profile with lots of open source contributions, or a non-profit you built a website for or something like that. Regardless of their efficacy at judging a candidate's ability to actually do DevOps/sysadmin work, they can absolutely help you get hired in my experience.
Right now, these are the certs I would recommend beginners pursue. You don't necessarily need all of them to get a job (I got started with just the first one on this list), and any real world experience you can get will be worth more than any number of certs imo (both in terms of knowledge gained and in increasing your prospects of getting hired), but this is a good starting place to help you plan out what certs you want to pursue. Some hiring managers and DevOps professionals don't care at all about certs, some folks will place way too much emphasis on them ... it all depends on the company and the person interviewing you. In my experience I feel that they absolutely helped me advance my career. If you feel you don't need them, that's cool too ... they're a lot of work so skip them if you can of course lol.

Tools and Experimentation

While certs can help you get hired, they won't make you a good DevOps Engineer or Site Reliability Engineer. The only way to get good, just like with anything else, is to practice. There are a lot of sub-areas in the DevOps world to specialize in ... though in my experience, especially at smaller companies, you'll be asked to do a little (or a lot) of all of them.
Though definitely not exhaustive, here's a list of tools you'll want to gain experience with both as points on a resume and as trusty tools in your tool belt you can call on to solve problems. While there is plenty of "resume driven development" in the DevOps world, these tools are solving real problems that people encounter and struggle with all the time, i.e., you're not just learning them because they are cool and flashy, but because not knowing and using them is a giant pain!
There are many, many other DevOps tools I left out that are worthwhile (I didn't even touch the tools in the kubernetes space like helm and spinnaker). Definitely don't stop at this list! A good DevOps engineer is always looking to add useful tools to their tool belt. This industry changes so quickly, it's hard to keep up. That's why it's important to also learn the "why" of each of these tools, so that you can determine which tool would best solve a particular problem. Nearly everything on this list could be swapped for another tool to accomplish the same goals. The ones I listed are simply the most common/popular and so are a good place to start for beginners.

Programming Languages

Any language you learn will be useful and make you a better sysadmin/DevOps Eng/SRE, but these are the 3 I would recommend that beginners target first.

Expanding your knowledge

As m4nz correctly pointed out in their post, while knowledge of and experience with popular DevOps tools is important; nothing beats in-depth knowledge of the underlying systems. The more you can learn about Linux, operating system design, distributed systems, git concepts, language design, networking (it's always DNS ;) the better. Yes, all the tools listed above are extremely useful and will help you do your job, but it helps to know why we use those tools in the first place. What problems are they solving? The solutions to many production problems have already been automated away for the most part: kubernetes will restart a failed service automatically, automated testing catches many common bugs, etc. ... but that means that sometimes the solution to the issue you're troubleshooting will be quite esoteric. Occam's razor still applies, and it's usually the simplest explanation that works; but sometimes the problem really is at the kernel level.
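As a tiny illustration of that "kubernetes restarts failed services" point, you can watch the self-healing happen with nothing but kubectl; the pod name and image below are arbitrary examples:

# Start a throwaway pod and check its restart count (should be 0)
kubectl run demo --image=nginx
kubectl get pod demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
# Kill the main process inside the container; the kubelet restarts it automatically
kubectl exec demo -- sh -c 'kill 1'
# A few seconds later the restart count has incremented, with no human intervention
kubectl get pod demo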
The biggest innovations in the IT world are generally ones of abstraction: config management abstracts away tedious server provisioning, cloud providers abstract away the data center, containers abstract away the OS level, container orchestration abstracts away the node and cluster level, etc. Understanding what is happening beneath each layer of abstraction is crucial. It gives you a "big picture" of how everything fits together and why things are the way they are; and it allows you to place new tools and information into the big picture so you'll know why they'd be useful or whether or not they'd work for your company and team before you've even looked in-depth at them.
Anyway, I hope that helps. I'll be happy to answer any beginner / getting-started questions that folks have! I don't care to argue about this or that point in my post, but if you have a better suggestion or additional advice then please just add it here in the comments or in your own post! A good DevOps Eng/SRE freely shares their knowledge so that we can all improve.
submitted by jamabake to devops

Ethereum on ARM. Raspberry Pi 4 images release based on Ubuntu 20.04 64 bit. Turn your Raspberry Pi 4 into an Eth 1.0 or Eth 2.0 node just by flashing the MicroSD card. Memory issues solved and new monitoring dashboards. Installation guide.

TL;DR: Flash your Raspberry Pi 4, plug in an ethernet cable, connect the SSD disk and power up the device to turn the Raspberry Pi 4 into a full Ethereum 1.0 node or an Ethereum 2.0 node (beacon chain / validator)
Some background first. As you know, we’ve been running into some memory issues [1] with the Raspberry Pi 4 image, as Raspbian OS is still 32-bit [2] (at least the userland). While we would prefer to stick with the official OS, we came to the conclusion that, in order to solve these issues, we need to migrate to a native 64-bit OS.
Besides, Eth 2.0 clients don’t support 32-bit binaries, so using Raspbian would exclude the Raspberry Pi 4 from running an Eth 2.0 node (and the possibility of staking).
So, after several tests, we are now releasing 2 different images based on Ubuntu 20.04 64-bit [3]: Eth 1.0 and Eth 2.0 editions.
Basically, both are the same image and include the same features as the Raspbian-based images, but they are set up to run Eth 1.0 or Eth 2.0 software by default.
Images take care of all the necessary steps, from setting up the environment and formatting the SSD disk to installing and running the Ethereum software as well as starting the blockchain synchronization.

Main features

Software included

Both images include the same packages, the only difference between them is that Eth 1.0 runs Geth by default and Eth 2.0 runs Prysm beacon chain by default.
Ethereum 1.0 clients
Ethereum 2.0 clients
Ethereum framework

INSTALLATION GUIDE AND USAGE

Recommended hardware and setup

Storage

You will need an SSD to run the Ethereum clients (without an SSD drive there’s absolutely no chance of syncing the Ethereum blockchain). There are 2 options:
In both cases, avoid getting low-quality SSD disks, as the disk is a key component of your node and it can drastically affect performance (and sync times).
Keep in mind that you need to plug the disk into a USB 3.0 port (blue).

Image download & installation

1.- Download Eth 1.0 or Eth 2.0 images:
ubuntu-20.04-preinstalled-server-arm64+raspi-eth1.img.zip
sha256 34f105201482279a5e83decd265bd124d167b0fefa43bc05e4268ff899b46f19
ubuntu-20.04-preinstalled-server-arm64+raspi-eth2.img.zip
sha256 74c0c15b708720e5ae5cac324f1afded6316537fb17166109326755232cd316e
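Before flashing, it's a good idea to check that your download matches the sha256 sum listed above:
sha256sum ubuntu-20.04-preinstalled-server-arm64+raspi-eth1.img.zip 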
2.- Flash the image
Insert the microSD in your Desktop / Laptop and download the file (Eth 1.0, for instance):
wget https://ethraspbian.com/downloads/ubuntu-20.04-preinstalled-server-arm64+raspi-eth1.img.zip 
Note: If you are not comfortable with command line or if you are running Windows, you can use Etcher (https://etcher.io)
Open a terminal and check your MicroSD device name running:
sudo fdisk -l 
You should see a device named mmcblk0 or sdd. Unzip and flash the image:
unzip ubuntu-20.04-preinstalled-server-arm64+raspi-eth1.img.zip sudo dd bs=1M if=ubuntu-20.04-preinstalled-server-arm64+raspi-eth1.img of=/dev/mmcblk0 && sync 
3.- Insert the MicroSD into the Raspberry Pi 4. Connect an Ethernet cable and attach the USB SSD disk (make sure you are using a blue port).
4.- Power on the device
The Ubuntu OS will boot up in less than one minute but you will need to wait approximately 10 minutes in order to allow the script to perform the necessary tasks to turn the device into an Ethereum node and reboot the Raspberry.
Depending on the image, you will be running:
5.- Log in
You can log in through SSH or using the console (if you have a monitor and keyboard attached)
User: ethereum Password: ethereum 
You will be prompted to change the password on first login, so you will need to login twice.
6.- Open 30303 port for Geth and 13000 if you are running Prysm beacon chain. If you don’t know how to do this, google “port forwarding” followed by your router model.
7.- Getting console output
You can see what’s happening in the background by typing:
sudo tail -f /var/log/syslog 
Congratulations. You are now running a full Ethereum node on your Raspberry Pi 4.

Syncing the Blockchain

Now you need to wait for the blockchain to sync. In the case of Eth 1.0, this will take a few days depending on several factors, but you can expect up to about 5-7 days.
If you are running the Eth 2.0 Topaz testnet you can expect 1-2 days of beacon chain synchronization time. Remember that you will need to set up the validator later in order to start the staking process (see the “How to run the Eth 2.0 validator” section below).
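If you want to query Geth directly while it syncs, you can attach to its IPC socket from the console; the socket path below is an assumption based on the data directory listed later in this guide, and you should run it as the ethereum user:
geth attach --exec 'eth.syncing' /home/ethereum/.geth/geth.ipc 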

Monitoring dashboards

For this first release, we included 3 monitoring dashboards based on Prometheus [5] / Grafana [6] in order to monitor the node and clients’ data (Geth and Besu). You can access through your web browser:
URL: http://your_raspberrypi_IP:3000 User: admin Password: ethereum 

Switching clients

All clients run as a systemd service. This is important because, if some problem arises, the system will respawn the process automatically.
Geth and Prysm beacon chain run by default (depending on what you are synchronizing, Eth 1.0 or Eth 2.0) so, if you want to switch to other clients (from Geth to Nethermind, for instance), you need to stop and disable Geth first, and enable and start the other client:
sudo systemctl stop geth && sudo systemctl disable geth 
Commands to enable and start each Eth 1.0 client:
sudo systemctl enable besu && sudo systemctl start besu sudo systemctl enable nethermind && sudo systemctl start nethermind sudo systemctl enable parity && sudo systemctl start parity 
Eth 2.0:
sudo systemctl stop prysm-beacon && sudo systemctl disable prysm-beacon sudo systemctl start lighthouse && sudo systemctl enable lighthouse 

Changing parameters

Clients’ config files are located in the /etc/ethereum/ directory. You can edit these files and restart the systemd service in order for the changes to take effect. The only exception is Nethermind which, additionally, has a mainnet config file located here:
/etc/nethermind/configs/mainnet.cfg 
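For example, to tweak Geth's options you would edit its file in that directory and restart the service (the exact file name below is an assumption, so check the contents of /etc/ethereum/ on your device first):
sudo nano /etc/ethereum/geth.conf 
sudo systemctl restart geth 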
Blockchain clients’ data is stored on the ethereum home account as follows (note the dot before the directory name):
Eth 1.0
/home/ethereum/.geth /home/ethereum/.parity /home/ethereum/.besu /home/ethereum/.nethermind 
Eth2.0
/home/ethereum/.eth2 /home/ethereum/.eth2validators /home/ethereum/.lighthouse 
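A quick way to keep an eye on how much data each client has synced is to check the size of its data directory, for instance:
sudo du -sh /home/ethereum/.geth 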

Nethermind and Hyperledger Besu

These 2 Eth 1.0 clients have become a great alternative to Geth and Parity. The more diversity in the network, the better, so you may want to give them a try and contribute to the network health.
Both need further testing so feel free to play with them and report back your feedback.

How to run the Eth 2.0 validator (staking)

Once the Topaz testnet beacon chain is synchronized you can run a validator in the same device. You will need to follow the steps described here:
https://prylabs.net/participate
The first time, you need to manually create an account by running the “validator” binary and setting a password. Once you have completed this step you can add the password to /etc/ethereum/prysm-validator.conf and start the validator as a systemd service.
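Assuming the validator service follows the same naming as its config file (please verify the exact service name on your device), enabling it and following its logs would look roughly like this:
sudo systemctl enable prysm-validator && sudo systemctl start prysm-validator 
sudo journalctl -u prysm-validator -f 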

Feedback appreciated

We put a lot of work into setting up the Raspberry Pi 4 as a full Ethereum node, as we know the massive user base of this device may have a very positive impact on the network.
Please, take into account that this is the first image based on Ubuntu 20.04 so there may be some bugs. If so, open an issue on Github or reach us on twitter (https://twitter.com/EthereumOnARM).

References

  1. https://github.com/ethereum/go-ethereum/issues/20190
  2. https://github.com/diglos/pi-gen
  3. https://ubuntu.com/download/raspberry-pi
  4. https://en.wikipedia.org/wiki/Port_forwarding
  5. https://prometheus.io
  6. https://grafana.com
  7. https://forum.armbian.com/topic/5565-zram-vs-swap/
  8. https://geth.ethereum.org
  9. https://github.com/openethereum/openethereum
  10. https://nethermind.io
  11. https://www.hyperledger.org/projects/besu
  12. https://github.com/prysmaticlabs/prysm
  13. https://lighthouse.sigmaprime.io
  14. https://ethersphere.github.io/swarm-home
  15. https://raiden.network
  16. https://ipfs.io
  17. https://status.im
  18. https://vipnode.org
submitted by diglos76 to ethereum

MAME 0.222


MAME 0.222, the product of our May/June development cycle, is ready today, and it’s a very exciting release. There are lots of bug fixes, including some long-standing issues with classics like Bosconian and Gaplus, and missing pan/zoom effects in games on Seta hardware. Two more Nintendo LCD games are supported: the Panorama Screen version of Popeye, and the two-player Donkey Kong 3 Micro Vs. System. New versions of supported games include a review copy of DonPachi that allows the game to be paused for photography, and a version of the adult Qix game Gals Panic for the Taiwanese market.
Other advancements on the arcade side include audio circuitry emulation for 280-ZZZAP, and protection microcontroller emulation for Kick and Run and Captain Silver.
The GRiD Compass series were possibly the first rugged computers in the clamshell form factor, and are perhaps best known for their use on NASA space shuttle missions in the 1980s. The initial model, the Compass 1101, is now usable in MAME. There are lots of improvements to the Tandy Color Computer drivers in this release, with better cartridge support being a theme. Acorn BBC series drivers now support Solidisk file system ROMs. Writing to IMD floppy images (popular for CP/M computers) is now supported, and a critical bug affecting writes to HFE disk images has been fixed. Software list additions include a collection of CDs for the SGI MIPS workstations.
There are several updates to Apple II emulation this month, including support for several accelerators, a new IWM floppy controller core, and support for using two memory cards simultaneously on the CFFA2. As usual, we’ve added the latest original software dumps and clean cracks to the software lists, including lots of educational titles.
Finally, the memory system has been optimised, yielding performance improvements in all emulated systems, you no longer need to avoid non-ASCII characters in paths when using the chdman tool, and jedutil supports more devices.
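For instance, a CD compression with chdman run from a folder with non-ASCII characters in its name now just works (the file names here are only placeholders):
chdman createcd -i "jeux vidéo/game.cue" -o "jeux vidéo/game.chd" 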
There were too many HyperScan RFID cards added to the software list to itemise them all here. You can read about all the updates in the whatsnew.txt file, or get the source and 64-bit Windows binary packages from the download page.

MAME Testers Bugs Fixed

New working machines

New working clones

Machines promoted to working

Clones promoted to working

New machines marked as NOT_WORKING

New clones marked as NOT_WORKING