A.I. & The Internet Of Thoughts...
Alchemical Tech Revolution · January 23, 2025 · 01:30:53 · 83.02 MB

A.I. & The Internet Of Thoughts...

What's the endgame of the $500 Billion Stargate A.I. Initiative? How does it relate to the medical system, transhumanism, and the mapping of the human brain? What do vaccines have to do with all of this? The answers may shock you...

Link to peer-reviewed study referenced:

https://www.researchgate.net/publication/384844917_At_Least_55_Undeclared_Chemical_Elements_Found_in_COVID-19_Vaccines_from_AstraZeneca_CanSino_Moderna_Pfizer_Sinopharm_and_Sputnik_V_with_Precise_ICP-MS


www.alchemicaltechrevolution.com


[00:00:00] ...lead the world in facing down a threat to decency and humanity. What is at stake is more than one small country. It is a big idea.

[00:00:51] You're listening to The Alchemical Tech Revolution, and I am your host, Wayne McRoy. Good evening, good morning, wherever you are around the world. Tonight, we're going to discuss A.I. and the Internet Of Thoughts. You see, ladies and gentlemen, there are some events rolling out in the world currently today, just within the past couple of days, that are very heavily leaning towards this becoming a reality if the infrastructure thereof has not already begun to be constructed around us.

[00:01:21] And, of course, we're talking about the Internet Of Thoughts and the association with artificial intelligence, and we'll connect some dots for you tonight. We're going to test the bounds of the newly acquired free speech that we allegedly have online, thanks to the signing of an executive order by one Mr. Donald J. Trump.

[00:01:45] And he also signed a big executive order announcing a $500 billion AI infrastructure investment in the U.S. The technocracy, ladies and gentlemen, is coming into full view.

[00:02:05] And they're doing so in some unexpected ways, and things are going on behind the scenes, and most people are not understanding the implications here. It's my job to point out the implications of what's being laid out here, and to go back and look at various different facets of things to point you in the right direction with this. That is my job. That is my challenge here. And we're going to test the waters here.

[00:02:35] I'm going to put this out on YouTube in its entirety, and we'll see if this no-censorship thing really holds true. Because we're going to talk about some extra-crunchy topics here tonight. Extra-crunchy. So, with that being the case, let's take a look first here at an article from CNN Business and Technology

[00:02:58] written by Claire Duffy, updated 10:46 p.m. Eastern Standard Time, Tuesday, January 21, 2025. Three top tech firms on Tuesday announced that they will create a new company called Stargate to grow artificial intelligence infrastructure in the United States.

[00:03:20] OpenAI CEO Sam Altman, SoftBank CEO Masayoshi Son, and Oracle Chairman Larry Ellison appeared at the White House Tuesday afternoon alongside President Donald Trump to announce the company, which Trump called the largest AI infrastructure project in history. The companies will invest $100 billion in the project to start, with plans to pour up to $500 billion into Stargate in the coming years.

[00:03:50] The project is expected to create 100,000 U.S. jobs, Trump said. Stargate will build the physical and virtual infrastructure to power the next generation of AI, including data centers around the country, Trump said. Ellison said the group's first one-million-square-foot data center project is already under construction in Texas. AI leaders have for months been sounding the alarm that more data centers,

[00:04:19] as well as the chips and electricity and water resources to run them, are needed to power their artificial intelligence ambitions in the coming years. And I'm going to pause there. And what are their ambitions? Well, they're not being forthright with you what those are, but I'll tell you all about that very soon. So there's a quote here next from Sam Altman. And he says, I think this will be the most important project of this era.

[00:04:46] We wouldn't be able to do this without you, Mr. President. Oracle is among the biggest U.S. data center operators, and SoftBank has the kind of deep pockets needed to fund the expansion of AI infrastructure, which is expected to cost billions of dollars. I'm going to pause. And we're talking major billions of dollars. $500 billion over the next couple of years is what they are intending to sink into AI infrastructure.

[00:05:16] This is a hugely concerning thing. If this is not on your radar, it better be. We've already seen some of the implications of AI in our everyday lives, how it is becoming more and more difficult to ascertain the truth because of the way AI muddles information. And this is a problem that the researchers and developers of AI have acknowledged. They call it a hallucination.

[00:05:45] This is an actual glitch in the AI, wherein it takes in too much information, including false information, and produces an imitation copy of that false information in one way, shape, or form, misinterpreting and misrepresenting the information it was given. It's called a hallucination. And I find the term uniquely satisfying for some reason here, that they call it a hallucination, because it's all about

[00:06:14] making us accept this artificial synthetic reality, this fantasy-based reality. And that's what AI has largely done thus far. We find things like deep fakes online now that are extremely difficult to ascertain the truth about. Is it the real deal or is it a cheap knockoff? Is it something produced by AI? And I've seen a lot of meme groups lately that will put out some of this AI-generated nonsense

[00:06:43] and show how AI ruins everything. If you haven't watched any of these, they're kind of entertaining, so I recommend checking them out. But it's this kind of thing that will lead further to the corruption of data in the future. And we're already beginning to see a lot of that happening. So at any rate, we now have this call for the development of AI

[00:07:13] here in the United States with this project, Stargate. Stargate. Stargate. Let's break that down on an esoteric level, first and foremost. Stargate. Well, if you've been following this program for any length of time, you may already be connecting the dots in your mind here. You know, if you've followed any of these occult topics for any length of time, within the auspices of the occult and the secret societies, the stars,

[00:07:42] the thousand points of light, well, these are the ascended masters. Or these are some spiritual entities of great power, fallen angelic beings, if you want to go there for some of these types of things. Or spiritual intelligences ascended into the heavens, according to some of the traditions, within the secret societies and within occultism. So these are non-human spirits manifesting in the sky.

[00:08:11] Or some of them are transfigured human spirits, according to those who claim to know things, and they call them ascended masters or some such nonsense like that. But these are spiritual intelligences without corporeal form. Now, gate. What is the gate? What is the gate between worlds? As I've told you time and again, well, the human being is the gateway between worlds, ladies and gentlemen. So the stargate,

[00:08:39] this would be the incorporation of these disembodied spirits into a human form here in physical reality. Your stargate project, your artificial intelligence, being the infrastructure, if you want to use that word, or substrate, for the manifestation of some spiritual entity that does not belong here in this natural creation. Created, of course, by our creator, and designed to work how it works.

[00:09:08] So there you go. A little bit of an esoteric breakdown of Project Stargate. And why would they name something Stargate in regard to artificial intelligence? Well, I think I revealed to you exactly why just then. But we'll get there a little bit further on as we go through the program here tonight. Because we're going to be reading from several different sources. And we're going to connect some dots that probably haven't been connected for you. So,

[00:09:38] we can add a little bit of context to what's going on, and to what the monumental importance of this whole AI infrastructure idea is that has now been put forward by the Trump administration. Brought to you by the technocracy, of course. And we'll see, as we go through the material here tonight, the importance of this. Now, something else that has come on my radar

[00:10:07] very recently here has to do with, of course, the much-touted and talked-about COVID vaccines. And we'll connect the dots with that here in a little while. You see, there's a new peer-reviewed study that came out in a scientific journal that tends to validate some of our concerns that many of us have had from the beginning. And like I said, we're testing the waters here. We'll see

[00:10:37] if this gets censored in any way, shape, or form, especially on YouTube. My monetization will probably be partial, if it monetizes at all. We'll see. We'll see if the things the Trump administration has promised us are going to come to fruition now. Of course, they could make certain claims about this, that it's community standards or some such thing, and quell your free speech that way. But at any rate,

[00:11:08] we'll see what happens with that. But we're going to look in a couple different places here and show you this is the long game coming to fruition right now, and how they always seem to utilize the two-party system to get where they want, because many people now are extremely happy with some of the things that Trump is doing already in his first couple days in office. And I can tell you, I noticed some of the more subtle esoteric clues

[00:11:37] in the inauguration ceremonies themselves out there for people with eyes to see to know and to understand what's going on behind the scenes with this. And even though there are some good consequences coming about because of this, quietly in the background, well, we're seeing technocracy snapping into place. And I don't particularly like that. These are the things I've been warning you of publicly since 2017 now. So,

[00:12:07] that being the case, I don't like to see these things come to fruition because it's all aligning in nefarious ways. And most people are completely oblivious of it because once again they've been psychologically manipulated like they have in the past to totally ignore all of the stuff going on in the background and to focus on some other aspects of social issues going on. And you have a lot of people right now

[00:12:36] that are just thrilled with some of the things that Trump has begun to institute in the early days of his administration. Now, like I said, there are some good things coming about, but we need to be aware of that. Is this a Trojan horse? I suspect the answer may be yes because you know it's all deeply funded by technocracy. Look who he's surrounded himself with in recent days.

[00:13:05] Look who was present at his inauguration. Well, it's all these tech billionaires, of course. Of course. It's the technocracy. And you'll see they're all kind of shifting gears now like Zuckerberg out there. All of a sudden he cares about free speech. Oh, he cares. He was bullied, I guess, by the Biden administration into censoring people's free speech when it came to talking about their experiences with this

[00:13:34] COVID shot rollout. And I'm going to just be blatant here and talk about vaccines and stuff. We're testing the waters. Now, if I get some type of retribution from YouTube on this, then know it's all a farce. We'll know it's all a farce because they could still hide behind corporate policies instead and say it's a community standards violation. But we'll see what happens. I'm curious. I want to test the waters. I'll be brave

[00:14:04] enough to do so. I know there's many out there who've already been conditioned into censoring themselves, self-censoring on many of these platforms because of the threat of demonetization or some such thing. But we're going to test the waters here and see what happens with that. But at any rate, as I was alluding to, we'll get to that connecting of the dots here. What does AI have to do with health care and all of these other concerns?

[00:14:36] Well, we're going to go in an interesting direction here with this, because we're going to read tonight from a scientific white paper, a technical report. And it's titled Whole Brain Emulation: A Roadmap. It was published in 2008 by Anders Sandberg and Nick Bostrom from the Future of Humanity Institute, the Faculty of Philosophy and the James Martin 21st Century School at

[00:15:06] Oxford University. Now, if you've heard of Nick Bostrom, this guy is a major pro-transhumanist. He's one of the major proponents of transhumanism in this world. And he co-wrote this paper back in 2008. That's how far ahead of the curve these people were with a lot of this stuff. And we'll see what the correlation and connection is here. Whole Brain

[00:15:35] Emulation: A Roadmap. Let's begin with the introduction. I'll go off on some side tangents, as you know I probably will. Introduction: Whole Brain Emulation, the possible future one-to-one modeling of the function of the human brain, is academically interesting and important for several reasons. The number one reason is research. Brain emulation is the logical endpoint of computational

[00:16:05] neuroscience's attempt to accurately model neurons and brain systems. Brain emulation would help us to understand the brain both in the lead-up to successful emulation and afterwards by providing an ideal test bed for neuroscientific experimentation and study. Neuromorphic engineering based on partial results would be useful in a number of applications such as pattern recognition, AI, and brain-computer interfaces. I'm going to pause for a moment there. Notice all of these things are

[00:16:34] mentioned in the very same sentence. Pattern recognition, AI, brain-computer interfaces. This is important. This is spelling out the context already for you. Let's continue on. As a long-term research goal, it might be a strong vision to stimulate computational neuroscience. As a case of future studies, it represents a case where a radical future possibility can be examined in the light of current knowledge. Then he goes on to mention economics being important here.

[00:17:04] The economic impact of copyable brains could be immense and could have profound societal consequences. Even low-probability events of such magnitude merit investigation. Individually, if emulation of particular brains is possible and affordable, and if concerns about individual identity can be met, such emulation would enable backup copies and digital immortality. I'm going to pause for a moment here, folks.

[00:17:35] Digital immortality. This is what the whole basis of this transhumanist philosophy culminates in. This is what they want. This is what they are seeking. And I'm here to tell you it's a lie. That is not your spirit. That is not your soul. That is not the ontological self or the I am that will live on. It's a cheap knockoff. But they will do their utmost to convince you that this is a reality.

[00:18:05] Let's continue. The next facet he's talking about here is philosophy. Brain emulation would itself be a test of many ideas in the philosophy of mind and the philosophy of identity, or provide a novel context for thinking about such ideas. It may represent a radical new form of human enhancement. I'm going to pause for a moment. Human enhancement. That is the ultimate goal of developing artificial intelligence.

[00:18:35] Human enhancement. You see, they want to combine human intelligence with artificial intelligence to transcend to this next level in evolution that they call the post-human. And this whole philosophy of transhumanism is that gateway to the stars that they seek, the stargate, the transitionary period, the

[00:19:03] transitional state of the combination of man and machine into combined sentience and ascension to the next level. This is how the transhumanists view things. And a lot of it has to do in the early work with this whole brain emulation. Modeling the brain. Mapping the brain. Whole brain emulation represents a formidable engineering and research problem, yet one which appears to have a

[00:19:33] well-defined goal and could, it would seem, be achieved by extrapolations of current technology. This is unlike many other suggested radically transformative technologies like artificial intelligence, where we do not have any clear metric of how far we are from success. I'm going to pause for a moment. But artificial intelligence, once again, will have a key role in all of this. Now, how accurate is AI?

[00:20:04] In its assessments, its calculations, and its predictions, we simply do not know. We simply do not know, but there are a lot of people putting a lot of faith in AI. Now, given these problems we're seeing with hallucinations in the AI models, I would think that's a problematic thing going forward, but they are going full steam ahead with this.

[00:20:33] Saying things like AI will allow us to cure cancer. Did you hear that? Did you hear Larry Ellison speaking in the White House at the press conference with President Trump when they announced this AI project, this $500 billion AI Stargate project? He had said that AI would be able

[00:21:03] to help us develop a vaccine, an individualized mRNA vaccine, and he said this exactly. He made sure to mention they were mRNA vaccines, individually produced ones that can be made within 48 hours, and that they could use AI to detect cancers before they happen with a simple blood test, and then they could formulate a vaccine specified to the individual within 48 hours.

[00:21:32] This is what they're calling the promise of AI. Now, take that for what you will. If that's truly what this will do, that's fantastic. But, think about some of the implications here. Think about the ways in which this could be abused or misused or could go off the rails very quickly. What if this artificial intelligence takes a disliking to the human

[00:22:02] beings? What if it decides that mass genocide is desirable to it? And that's just speaking in terms of what if it goes rogue or becomes sentient in some way. I don't think it's feasible. I don't think artificial intelligence can in any way become sentient or conscious. I could be wrong about that, but I don't see the evidence that any such thing could happen. You see,

[00:22:31] it lacks the divine spark, the spirit, the animus that we have. It does not have that, and that is why they want to merge the AI with the human being, because we have that divine spark. AI does not. So in order for a manifesting spirit to be able to inhabit the gateway that is the human being, it needs that divine spark to work with. And therefore,

[00:23:00] they want to combine artificial intelligence with the human intelligence and create this abomination of nature, this abomination of desolation. And if you have a biblical mindset, maybe you know where I'm going with that. But at any rate, we'll continue on here with this whole brain emulation roadmap. In order to develop ideas about the feasibility of whole brain

[00:23:30] emulation, ground technology foresight, and stimulate interdisciplinary exchange, the Future of Humanity Institute hosted a workshop on May 26th and 27th, 2007, in Oxford. Invited experts from areas such as computational neuroscience, brain scanning technology, computing, nanotechnology, and neurobiology presented their findings and discussed the possibilities, problems, and milestones that would have to be reached before whole brain

[00:23:59] emulation becomes feasible. The workshop avoided dealing with socioeconomic ramifications and with philosophical issues such as theory of mind, identity, or ethics. While important, such discussions would undoubtedly benefit from a more comprehensive understanding of the brain, and it was this understanding that we wish to focus on furthering during this workshop. Such issues will likely be dealt with in future workshops. I'm going to pause for a moment. So they don't care about the morality or ethics

[00:24:29] of this. They don't care about actual theory of mind. They don't care about the true nature of consciousness. They just want to model this off of this physical material world in which we live, where they could explain away all phenomena, even consciousness as little more than some physical cause and effect sequence. You see, they want to try to make you believe, and this is what their whole premise relies upon,

[00:25:29] that consciousness is nothing more than the byproduct of the electrochemical activity of the brain and brainstem. And if that's the case, it can be duplicated via machine or algorithm. And this, when sufficiently mapped out, can be duplicated, can be backed up in a machine. Your consciousness can live on, if they are correct with this, which I assure you they are not. You see, they have completely disregarded the spiritual

[00:25:29] component of this. We are spiritual beings having a physical experience here. That, I think, is almost undeniable by anybody who takes the time to dig deeply into the nature of reality. But at any rate, we'll get back to the reading here. This document combines an earlier white paper that was circulated among workshop participants and additions suggested by those

[00:25:59] participants before, during, and after the workshop. It aims at providing a preliminary roadmap for whole brain emulation, sketching out key technologies that would need to be developed or refined, and identifying key problems or uncertainties. Brain emulation is currently only a theoretical technology. I'm going to pause for a moment. That was as of 2008, the writing of this paper, allegedly. This makes it vulnerable to speculation, hand-waving,

[00:26:28] and untestable claims. As proposed by Nick Szabo, falsifiable design is a way of curbing the problems with theoretical technology. And he says, quote, the designers of a theoretical technology in any but the most predictable of areas should identify its assumptions and claims that have not already been tested in a laboratory. They should design not only the technology, but also a map of the uncertainties and edge cases in the design and a series of such experiments and tests

[00:26:58] that would progressively reduce these uncertainties. A proposal that lacks this admission of uncertainties, coupled with designs of experiments that will reduce such uncertainties, should not be deemed credible for the purposes of any important decision. We might call this requirement a requirement for a falsifiable design. End quote. In the case of brain emulation, this would mean not only sketching how a brain emulator would work (if it could be built) and a

[00:27:28] roadmap of technologies needed to implement it, but also a list of the main uncertainties in how it would function and proposed experiments to reduce these uncertainties. It is important to emphasize the long-term and speculative nature of many aspects of this roadmap, which in any case is to be regarded only as a first draft, to be updated, refined, and corrected as better information becomes available. Given the difficulties and uncertainties inherent in this type of work,

[00:27:57] one may ask whether our study is not premature. Our view is that when the stakes are potentially extremely high, it is important to apply the best available methods to try to understand the issue. Even if these methods are relatively weak, it is the best we can do. The alternative would be to turn a blind eye to what could turn out to be a pivotal development. Without first studying the question, how is one to form any well-grounded view one way or the other as to the

[00:28:26] feasibility and proximity of a prospect like whole brain emulation? So I'm going to pause before we continue on. So now they talk about some of the things that would be necessary in order to make this a reality. And as we'll see, many of these things have seemingly been snapping into place since 2008, when this was written. So let's

[00:28:56] continue on. And I might skip around a little bit in this paper to get to some of the more important details and we'll connect the dots with some other bits of research that I've done. The concept of brain emulation. Whole brain emulation, often informally called uploading or downloading, has been the subject of much science fiction and also some preliminary studies.

[00:29:25] The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain. Emulation and simulation. Pay close attention here. Why do you think they're trying to convince you we live in a simulation? Well, because if we do, then

[00:29:56] uploading your consciousness to a machine would be little more than just a step sideways into a more desirable reality for you. If they could convince you that's what's going on here. Think about that. Food for thought. But let's get back to it. The term emulation originates in computer science, where it denotes mimicking the function of a program or computer hardware by having its low-level functions simulated by another program. While a simulation mimics the outward results, an

[00:30:25] emulation mimics the internal causal dynamics at some suitable level of description. The emulation is regarded as successful if the emulated system produces the same outward behavior and results as the original, possibly with a speed difference. This is somewhat softer than a strict mathematical definition. According to the Church-Turing thesis, a Turing machine can emulate any other Turing machine. The physical Church-Turing

[00:30:54] thesis claims that every physically computable function can be computed by a Turing machine. This is the basis for brain emulation. If brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine. Even if true, however, it does not demonstrate that it is a computationally feasible process. Going to pause for a moment. So theoretically, they think they can

[00:31:23] actually copy your brain, make a duplicate of it, and have it function primarily the same way. But there are some doubts about the feasibility of this. In the following, emulation will refer to a one-to-one model where all relevant properties of a system exist, while a simulation will denote a model where only some properties exist. Emulations may behave differently

[00:31:52] from each other or the original due to noise or intrinsic chaos, but behave within the range of what one would expect from the original if it had experienced the same noise or chaos. Going to pause for a moment here. So there may be some distortion involved in this emulation. Interesting, isn't it? Maybe it would act as if it were psychotic or schizophrenic or some such thing because of

[00:32:21] the noise or chaos or the extra data malfunction within the machine. By analogy, with a software emulator, we can say that a brain emulator is software and possibly dedicated non-brain hardware that models the states and functional dynamics of a brain at a relatively fine-grained level of detail. In particular, a mind emulation is a brain emulator that is detailed and correct enough to produce the phenomenological effects

[00:32:50] of a mind. A person emulation is a mind emulation that emulates a particular mind. What the relevant properties are is a crucial issue. In terms of software emulation, this is often the bits stored in memory and how they are processed. A computer emulator may emulate the processor, memory, input, output, and so on of the original computer, but does not simulate the actual electronic workings of

[00:33:19] the components, only their qualitative function on the stored information and its interaction with the outside world. While lower-level emulation of computers may be possible, it would be inefficient and not contribute much to the functions that interest us. Depending on the desired success criterion, emulation may require different levels of detail. It might also use different levels of detail in different parts of the system. In the computer example, emulating the result of a

[00:33:49] mathematical calculation may not require simulating the execution of all operating system calls for math functions, since these can be done more efficiently by the emulating computer's processor, while emulating the behavior of an analog video effect may require a detailed electronics simulation. I'm going to pause for a moment. So now they're talking about some of the complexity of making this feasible. But when you begin to realize that the development of quantum computers

[00:34:19] makes a lot of this a lot more feasible, then you should have some concern. Little need for whole system understanding. Think about this. Now this is part of the hypothesis put forward here. They say there's little need for whole system understanding. They don't need to understand the whole system to control or emulate the whole system, do they? This is what they're saying. Let's pay attention here. An important

hypothesis for whole brain emulation is that in order to emulate the brain we do not need to understand the whole system, but rather we just need a database containing all necessary low-level information about the brain and knowledge of the local update rules that change brain states from moment to moment. A functional understanding (why is a particular piece of cortex organized in a certain way) is logically separate from detailed knowledge (how is it organized, and how does this structure respond

[00:35:18] to signals?). Functional understanding may be a possible result from detailed knowledge, and it may help gather only the relevant information for whole brain emulation, but it is entirely possible that we could acquire full knowledge of the component parts and interactions of the brain without gaining an insight into how these produce, say, consciousness or intelligence. Going to pause for a moment here. So now, like I said, they don't

[00:35:47] really seek to understand consciousness, what it really is. They just want to be able to duplicate it and manipulate it, and that's what they're talking about. This is taking the cybernetics approach, the whole-systems control approach. They don't necessarily need to understand exactly how it works; they just know what works by manipulating the whole in a certain way. And then if you add artificial

[00:36:16] intelligence to this, then you begin to get a whole other can of worms opened here. You see, they don't understand how or why artificial intelligence does the things that it does, but they know it works. Same basic premise here. This really is a concerning thing. If they don't understand how or why it operates, how could they understand the

[00:36:46] safety of uploading your consciousness to a machine? They really can't, can they? They don't know. But let's continue on. Even a database merely containing the complete parts list of the brain, including the morphology of its neurons, the locations, sizes, and types of synaptic connections would be immensely useful for research. It would enable data-driven research in the same way as genomics has done in the field of cell biology.

[00:37:15] Computational neuroscience attempts to understand the brain by making mathematical or software models of neural systems. Currently, the models are usually far simpler than the studied systems, with the exception of some small neural networks, such as the lobster stomatogastric ganglion, and the locomotor network of the lamprey spinal cord. And I'm going to pause for a moment, and those were two studies being done at that time between 2002 and 2007, when they were beginning

[00:37:45] to try to map out the brain function and the neuronal function of these animals. And they've done a lot more since then. Often, models involve a combination of simplified parts (simulated neurons and synaptic learning rules) and network structures (subsampling of biological neurons, simple topologies). Such networks can themselves constitute learning or pattern-recognizing systems on their own. Artificial neural

[00:38:15] network models can be used to qualitatively model, explain, and analyze the function of brain systems. Connectionist models build more complex models of cognition or brain function on these simpler parts. The end point of their pursuit would be models that encompass a full understanding of the function of all brain systems. Such qualitative models might not exhibit intelligence or the complexity of human behavior, but would enable

[00:38:44] a formalized understanding of how they come about from the simple parts. Going to pause for a moment. So once again, applying cybernetics methodologies here to understand how do we get performance out of this? You see, they'll model a system and they'll find out what inputs and outputs work and then go off of that rather than trying to intricately understand the actual mechanism

[00:39:13] of action here. Another approach in computational neuroscience involves creating more biologically realistic models where information about the biological details of neurons, such as their electrochemistry, biochemistry, detailed morphology and connectivity are included. At its simplest, we find compartment models of individual neurons and synapses, while more complex models include multiple realistic neurons connected into networks, possibly taking interactions such

[00:39:42] as chemical volume transmission into account. This approach can be seen as a quantitative understanding of the brain, aiming for a complete list of the biological parts, chemical species, neuron morphologies, receptor types and distributions, etc., and modeling as accurately as possible the way in which these parts interact. Given this information, increasingly large and complex simulations of neural systems can be created. Whole brain

[00:40:12] emulation represents the logical conclusion of this type of quantitative model, a one-to-one model of brain function. I'm going to pause for a moment. So they want to make an objective measure of brain function. What have I always told you, and this applies especially for the cybernetics model of things, they want to objectively weigh, measure, and count everything. Because if you can quantify

[00:40:40] a thing that gives you some measure of control over it, that's the important thing to understand here. That's why they seek to objectify even these subjective things like consciousness. And they've developed different models to do so. And that's essentially what this is speaking of here. So they have this notion that a lot of it has to do with detailed modeling

[00:41:10] of the human brain and brainstem. So let's get down to the roadmap requirements here. What do they require to do this? This is what was written in 2008 as to what they required to create a brain emulation. Whole brain emulation requires three main capabilities. The ability to physically scan brains in order to acquire the necessary information. The ability to

[00:41:40] interpret the scanned data to build a software model. And the ability to simulate this very large model. These in turn require a number of sub-capabilities. Scanning methods require ways of preparing the brains, in particular separation from other tissue, fixation, and possibly dyeing. There is also a need for methods of physically handling and storing pieces of tissue. Since most scanning methods cannot image

[00:42:09] large volumes, the brains will have to be sectioned into manageable pieces. This must allow corresponding cells and dendrites to be identified on both sides. While fixation and sectioning methods are commonly used in neuroscience, the demands for whole brain emulation are stricter. Much larger volumes must be handled with far less tolerance for damage. Imaging methods are discussed in more detail in the chapter on scanning. The three key issues are achieving the

[00:42:39] necessary resolution to image the smallest systems needed for an emulation, the ability to image (not necessarily simultaneously) the entire brain, and the ability to acquire the functionally relevant information. Translating the data from the imaging process into software requires sophisticated image processing, the ability to interpret the imagery into simulation-relevant parameters, and having a computational neuroscience model

[00:43:09] of sufficient precision. The image processing will have to deal with the unavoidable artifacts from scanning such as distortions and noise, as well as occasional lost data. It will likely include methods of converting direct scan data into more compressed forms, such as traced structures, in order to avoid excessive storage needs. The scan interpretation process makes use of this data to estimate the connectivity and to identify synaptic connections, cell types,

[00:43:39] and simulation parameters. It then places this information in an inventory database for the emulation. These steps are discussed in the image processing chapter here below. I'm going to pause for a moment. So now they're talking about imaging the brain. How do they intend to do this? Well, it's very difficult to get any

[00:44:09] data on a dead brain, isn't it? So this would have to be a live brain. And of course, brain scans, as we know them, have become more sophisticated through the years. They use different materials to be able to get more accurate images of the brain and the brain activity. And many of these methods have become less invasive through the course of time. And of course, the one we're going to get to here.

[00:44:39] A little bit later on, a little down the road, is the most concerning one. And this has to do with nanotechnology. And we'll connect the dots for you there very soon. But let's go ahead and read on. The software model requires both a mathematical model of neural activity and the ways of efficiently implementing such models on computers. Computational neuroscience aims at modeling the behavior of neural entities such as networks, neurons, synapses, and learning

[00:45:09] processes. For whole brain emulation, it needs to have sufficiently good models of all relevant kinds of subsystems, along with the relevant parameters set from scan data in order to construct a computational model of the actual brain that was scanned. To emulate a brain, we need enough computing power to run the basic emulation software, a sufficiently realistic body simulation, and possibly a simulated environment. The key demands are

[00:45:39] for memory storage to hold the information and processor power to run it at a suitable speed. The massive parallelism of the problem will put some significant demands on the internal bandwidth of the computing system. In addition, whole brain emulation likely requires the development of three supporting technology areas with which it has a symbiotic relationship. Pay close attention, folks. First, validation methods to check that the

[00:46:08] other steps in the procedure are accurate and are based on accurate data and models. This includes validation of scanning, validation of scan interpretation, validation of neuroscience models, validation of implementation, and ways of testing the success of whole brain emulation. While ordinary neuroscience research certainly aims at validation, it does not systematize it. For a complex multi-step research

[00:46:38] effort like whole brain emulation, integrated validation is likely necessary to ensure that bad data or methods do not confuse later steps in the process. Second, whole brain emulation requires significant low-level understanding of neuroscience in order to construct the necessary computational models and scan interpretation methods. This is essentially a continuation and strengthening of systems biology and computational neuroscience aiming at a very complete description

[00:47:07] of the brain on some size or functional scale. Third, whole brain emulation is large-scale neuroscience requiring methods of automating neuroscientific information gathering and experimentation. This will reduce costs and increase throughput and is necessary in order to handle the huge volumes of data needed. Large-scale industrial neuroscience is clearly relevant for other neuroscience projects too. So I'm going to pause

[00:47:37] there before we continue. So now they need immense amounts of energy and power and computational power to do this. And now do you understand why they're throwing 500 billion dollars into AI research? It has to do with this. More so than I would say some of the other things they're touting like curing cancer or some such thing. They want to understand the human mind.

[00:48:06] They want to be able to emulate the human mind. Duplicate it. So here, the roadmap. Based on these considerations, we can sketch out a roadmap with milestones, required technologies, key uncertainties, and external technology interactions. The approach to whole brain emulation has two phases. The first phase consists of developing the basic capabilities and settling key

[00:48:35] research questions that determine the feasibility, required level of detail, and optimal techniques. This phase mainly involves partial scans, simulations, and integration of the research modalities. The second phase begins once the core methods have been developed and an automated scan-interpret-simulate pipeline has been achieved. At this point, the first emulations become possible. If the developed methods prove to be scalable, they can

[00:49:05] then be applied to increasingly complex brains. Here, the main issue is scaling up techniques that have already been proven on the small scale. And this, ladies and gentlemen, is where artificial intelligence comes in. This is why they want it. Because they think artificial intelligence can do this in a very short time, whereas it would take human beings a very, very long time to map out all these details. So here's the key

[00:49:33] milestones of the whole brain emulation roadmap. Ground truth models. A set of cases where the biological ground truth is known and can be compared to scans, interpretations, and simulations in order to determine their accuracy. Determining appropriate level of simulation. This includes determining whether there exists any suitable scale separation in brains. If not, the whole brain emulation effort may be severely limited. And if so, on what level?

[00:50:03] This would then be the relevant scale for scanning and simulation. And then thirdly here, full cell simulation. A complete simulation of a cell or similarly complex biological system. While strictly not necessary for whole brain emulation, it would be a test case for large-scale simulations. I'm going to pause for a moment. Whole cell simulation. They want to simulate the

[00:50:33] functions of a cell. They want to be able to scan a cell. They've done a lot of work understanding how cells function. And we have an awful lot of models of this. So at a cellular level, this is where they fundamentally want to begin to be able to make an emulation. Next, it says body simulation. An adequate simulation of the model animal's body and environment.

[00:51:02] Ideally demonstrated by fooling a real animal connected to it. I'm going to pause for a moment here. Fooling a real animal connected to it. So they want to separate out the brain or the mind from the body and create an artificial body and fool the animal with that to make them believe they have a body that they may not.

[00:51:32] And we see a lot of concerning experiments that have been reported from around the world with similar notions to this. I won't go into details here for time's sake, but we have seen major advances towards this goal since 2008. Simulation hardware. Special purpose simulation or emulation computer hardware may be found to be necessary or effective. Organism simulation.

[00:52:02] A simulation of an entire organism in terms of neural control, body state, and environmental interaction. This would not be a true emulation, since it is not based on any individual but rather on known physiological data for the species. This would enable more realistic and individual models as scans, models, and computer power improve. I'm going to pause for a moment.

[00:52:30] Well, I think they have been able to duplicate some of these processes. We have all of these different experiments, as I alluded to earlier, that have been going on for a long time now that have come to the surface in the news media. All you have to do is go back and look at them. And you'll see demonstrations of stuff like this that have happened since then.

[00:53:00] Next is demonstration of function deduction. Demonstrating that all relevant functional properties on a level can be deduced from scan data. Next, a complete inventory, a complete database of entities at some level of resolution for a neural system, e.g. not just the connectivity of the C. elegans nervous system, but also the electrophysiology of the cells and synapses. This would enable full emulation if all the update rules are known. It demonstrates that

[00:53:30] the scanning and translation methods have matured. And I'm going to pause for a moment, and I think since then they have achieved that goal. Speaking of C. elegans and its nervous system, I believe this is some type of primitive roundworm or some such thing, if I remember correctly. But I think, if memory serves me, they have been able to do a complete inventory on this for the first time just within the past year or two.

[00:53:59] So we've already met some of these goals. Next on their list here, automated pipeline. A system able to produce a simulation based on an input tissue sample going through the scan, interpretation, and simulation steps without major human intervention. The resulting simulation would be based on the particular tissue rather than being a generic model. Next, partial emulation.

[00:54:29] A complete emulation of neural systems such as the retina, invertebrate ganglia, or a V1 circuit based on scanned and interpreted data from a brain rather than species data. This would demonstrate the feasibility of data-driven brain emulation. Next. Eutelic organism emulation. A complete emulation of a simple organism such as C. elegans or another eutelic with a fixed

[00:54:57] nervous system organism using data from the pipeline scanning. It may turn out that it is unnecessary to start with a eutelic organism and the first organism emulation would be a more complex invertebrate. Going to pause for a moment, but I do believe it is exactly this C. elegans that they have achieved this with now. Since the writing of this paper. We'll have to look that up and check into that after the fact.

[00:55:27] Invertebrate whole brain emulation. Emulation of an invertebrate such as a snail or an insect with learning. This would test whether the whole brain emulation approach can produce appropriate behaviors. If the scanned individual was trained before scanning, retention of the trained responses can be checked. Small mammal whole brain emulation. Demonstration of whole brain emulation in mice or rats proving that the approach can handle mammalian neuroanatomy.

[00:55:56] Large mammal whole brain emulation. Demonstration in higher mammals giving further information about how well individuality, memory, and skills are preserved as well as investigation of safety concerns. And then finally human whole brain emulation. Demonstration of an interactive human emulation. These are all things that are necessary in their framework. Now I do believe they are at the scale with this now

[00:56:25] where they are testing on mammals. This may be going on secretly or it may be going on openly. I don't really claim to know but I suspect from things I've seen leaked out into the media that this is going on and we may be at the precipice of human whole brain emulation. And maybe we're already there. And we'll get to that in a moment.

[00:56:55] Computers are developed independently of any emulation goal driven by mass market forces and the need for special high performance hardware. Moore's law and related exponential trends appear likely to continue some distance into the future. And the feedback loops powering them are unlikely to rapidly disappear. There is independent and often sizable investment into computer games, virtual reality, physics simulations, and medical simulations. Like computers, these fields produce their own revenue

[00:57:24] streams and do not require whole brain emulation specific or scientific encouragement. A large number of other technologies such as microscopy, image processing, and computational neuroscience are driven by research and niche applications. This means less funding, more variability of the funding, and dependence on smaller groups developing them. Scanning technologies are tied to how much money there is in research, including brain emulation research, unless

[00:57:53] medical or other applications can be found. Validation techniques are not widely used in neuroscience yet, but could and should become standard as systems biology becomes more common and widely applied. So I'm going to pause for a moment. So it says here, scanning technologies are tied to how much money there is in research, unless medical or other applications can be found.

[00:58:24] Well, what else have they been scanning? Think about the whole concept of a vaccine passport. Think about the whole concept of maybe vaccine-induced nanotechnologies. But let's continue. It says, finally, there are a few areas relatively specific to whole brain emulation. Large-scale neuroscience, physical handling of large amounts of tissue blocks, achieving high scanning volumes, measuring functional information from the images,

[00:58:54] automated identification of cell types, synapses, connectivity, and parameters. These areas are the ones that need most support in order to enable whole brain emulation. The latter group is also the hardest to forecast, since it has weak drivers and a small number of researchers. The first group is easier to extrapolate by using current trends with the assumption that they remain unbroken sufficiently far into the future. And I'm going to pause for a moment here. Now, if you put, say, some type of

[00:59:24] a crisis or emergency in place, then maybe you could begin testing on larger models in some such way, if you're catching my drift here. This roadmap is roughly centered on the assumption that scanning technology will be similar to current microscopy developed for large-scale neuroscience, automated sectioning of fixated tissue, and local image-to-model conversion. For reasons discussed in the

[00:59:52] scanning section of this paper, non-destructive scanning of living brains appears to be hard compared to the slice-and-dice approach, where we have various small-scale existence proofs. However, as pointed out by Robert Freitas Jr., nanomedical techniques could possibly enable non-destructive scanning by use of invasive measurement devices. If such devices prove infeasible,

[01:00:20] molecular nanotechnology could likely provide many new scanning methodologies, as well as radical improvement of neuroscientific research methods and the efficiency of many roadmap technologies. Even far more modest developments, such as single-molecule analysis, nanosensors, artificial antibodies, and nanoparticles for imaging, which are expected to be in use by 2015, it says here. I'm going to pause for a moment. They have been in use

[01:00:50] at this point for more than 10 years, ladies and gentlemen, would have an important impact. Hence, early or very successful nanotechnology would offer faster and alternative routes to achieve the roadmap. Analyzing the likelihood, time frame, and abilities of such nanomedicine is outside the scope of this document. I'm going to pause for a moment. Remember, this was written in 2008.

[01:01:18] Now, it says here the time frame they predicted this by was 2015, but I think they started using nanoparticles, nanotechnologies, nanomachines, to map out body systems in scans before then. Probably a good two years before then, 2013 or so. If memory serves me correctly, I could be wrong. But we have a whole branch

[01:01:47] of medicine called nanomedicine now. There's a lot of research that's been put into this. A lot of research. Now, remember, nanomedical techniques could possibly enable non-destructive scanning by the use of invasive measurement devices. Nanotechnology.

[01:02:14] There's one key component to this. As discussed in the overview, whole brain emulation does not assume any need for high-level understanding of the brain or mind. In fact, should such understanding be reached, it is likely that it could be used to produce artificial intelligence. Human-level AI or superintelligent AI would not necessarily preclude whole brain emulation,

[01:02:44] but some of the scientific and economic reasons would vanish, possibly making the field less relevant. On the other hand, powerful AI could greatly accelerate neuroscience advances and perhaps help develop whole brain emulation for other purposes. Conversely, success in some parts of the whole brain emulation endeavor could help AI, for example, if cortical microcircuitry and learning rules could be simulated efficiently as a general learning or behavior system.

[01:03:14] Going to pause for a moment here, folks. Artificial general intelligence developed by mapping the mind, the brain, and vice versa. Artificial intelligence can more effectively map the brain and duplicate these things. That's why they're trying their hardest to get this in place. The impact and development of whole brain emulation will depend on which of the main capabilities

[01:03:43] scanning, interpretation, or simulation, develops last. If they develop relatively independently, it would be unlikely for all three to mature enough to enable human-level emulations at the same time. If computing power is the limiting factor, increasingly complex animal emulations are likely to appear. Society has time to adapt to the prospect of human-level whole brain emulation in the near future. If scanning resolution, image interpretation,

[01:04:13] or neural simulation is the limiting factor, a relatively sudden breakthrough is possible. There is enough computing power, scanning technology, and software to go rapidly from simple to complex organisms using relatively small computers and projects. This could lead to a surprise scenario wherein society has little time to consider the possibility of human-level whole brain emulation. If computing power is the limiting factor, or if scanning is the bottleneck due to the lack of throughput,

[01:04:42] then the pace of development would likely become economically determined. If enough investment were made, whole brain emulation could be achieved rapidly. This would place whole brain emulation enablement under political or economic control, to a greater degree than in the alternative scenarios. And I'm going to pause for a moment to tell you that has been the case with this. They know how. They have the technologies. They've begun

to fill in the gaps since 2008 with this. And now all they lack is the funding, and now they're getting the funding. They're going to roll this out, folks. They are going to roll this out. And of course, they're going to attach this to the medical model as we had said earlier. Now, if you pair this with what

[01:05:44] has been done behind closed doors within the military industrial complex's black budget community, then you can begin to understand how far ahead technologically this likely is. And now it's just the lack of funding that has gone into developing the AI in order to bring it all online and snap it all together. And now that funding is getting put in place.

[01:06:13] There have likely been trillions of dollars funneled into this already in years past through the secret black budget community. This stuff goes on in these unacknowledged special access programs that operate behind the scenes. Likely funding from DARPA goes into this as well. And when you attach this to the idea

[01:06:43] of various medical agendas that we've seen, you can begin to connect the dots: nanotechnology, mapping the brain, imaging. We can tell a lot of things from this. And this brings me to the next thing that came onto my radar. This comes from a paper released Friday in the International Journal of

[01:07:13] Vaccine Theory, Practice, and Research, a peer-reviewed scientific periodical. But a lot of people are skeptical of this periodical. Let's put that out there at the beginning of this. I've had people tell me, oh, that's not legitimate. That's not a legitimate scientific journal. It is. It's just an alternative scientific journal. You see, these are the people that won't

[01:07:42] get funded or actually printed in some of these other scientific journals. The whole peer review process is compromised, and that's the problem here. But this is a peer-reviewed journal. It's an independent one that is funded by those people, citizens, who want the truth. That's why they don't like this. The academic world does not like this journal

[01:08:11] because these people do actual studies and experiments and find things that don't fit neatly into the criteria of what the mainstream narrative would tell you. And this paper, which was just released here Friday, was largely compiled in October of 2024. And the title of this is At Least 55 Undeclared Chemical Elements Found in COVID-19 Vaccines from

[01:08:40] AstraZeneca, CanSino, Moderna, Pfizer, Sinopharm, and Sputnik V, with precise ICP-MS. And ICP-MS, inductively coupled plasma mass spectrometry, is the testing method they used to find these things. And we'll read you the abstract and we'll go into a little bit of details here about this. And I will connect some more dots for you. You may have heard things in the past about some

[01:09:10] other things that have come forward by scientists who independently studied these COVID vaccines. And this tells you a lot of what you need to know. Abstract. The experimental vaccines supposedly invented to combat COVID-19 were coercively forced upon the global population beginning in late 2020. They have precipitated innumerable and varied disease conditions ranging

[01:09:39] from mild to lethal. This increase in health disorders and sudden deaths began to manifest concomitantly with the number of people inoculated and doses administered per person. By the end of 2023, 24 undeclared chemical elements had been detected by scanning electron microscopy coupled with energy dispersive x-ray spectroscopy in the COVID-19 vaccines of different brands by various research groups from different countries around the world.

[01:10:08] In this paper, we report laboratory results from high precision inductively coupled plasma mass spectrometry that confirm and expand previous results by the scanning electron microscope coupled with energy dispersive x-ray spectroscopy. To this end, the contents of vials from different lots of the brands from AstraZeneca from Oxford, CanSino Biologics, Pfizer, BioNTech,

[01:10:38] Sinopharm, Moderna, and Sputnik V were analyzed. Among the undeclared chemical elements, 12 of the 15 cytotoxic lanthanides used in electronic devices and optogenetics were detected. Going to pause for a moment. Lanthanides are rare earth metals. And there are 15 of these rare earth metals, which are used in electronic devices

[01:11:06] and for the science of optogenetics, which you may have heard me refer to here before. 12 of those 15 cytotoxic elements were found. In addition, among the undeclared elements were all 11 of the heavy metals. Chromium was found in 100% of the samples, arsenic in 82%, nickel in 59%, cobalt and copper in 47%, tin in 41%,

[01:11:34] thallium in 24%, cadmium, lead, and manganese in 18%, and mercury in 6%. A total of 55 undeclared chemical elements were found with this mass spectrometry. Combining these findings with the results from the other scans that were done previously, altogether 62 undeclared chemical elements have been found in the various products. In all brands, we found boron,

[01:12:04] calcium, titanium, arsenic, nickel, chromium, copper, gallium, strontium, pay close attention to this next one folks because you heard it here first, niobium, molybdenum, barium, and hafnium. With this scanning, they found that the content of these samples is heterogeneous. The elemental composition varies in different aliquots extracted from the same vial.

[01:12:34] So what does this mean? Now there is one substance that has been found in these various lots of these various COVID vaccines which nobody has been talking about, and it is massively important. I pointed this element out back in my 2017 book, The Alchemical Tech Revolution, my very first book. That element is niobium.

[01:13:04] I'm going to read a little excerpt from my book about niobium in a moment here. But let's go ahead and get to the conclusions drawn by this paper wherein they found these various undocumented elements within these vaccines. Here's the conclusion of the paper. Now I'm going to add a link in the show notes to the PDF of this paper. You can go in and read it for yourself.

[01:13:34] They acknowledge the existence of nanotechnology in these COVID vaccines. Many people have said that they have found graphene in them, while there are other things in there too that are undeclared components of this. And this leads to quite a bit of speculation as to the true nature of these things. Well here's the conclusions of this paper. Based on the identification and ranges of the

[01:14:04] quantities of the chemical elements discovered, and on the physical and chemical characteristics of the contents of the vaccines studied, it is of utmost importance to highlight the great similarity that exists between the products of the different brands. The observed differences in chemical elements found in the different brands, we believe, are due to the time lapse between drawing of samples on account of the changing structure of the self-assembling entities in the fluids contained in the vials. We do not believe the

[01:14:34] observed differences are because of manufacturing processes specific to any given brand or to differences between lots because of stochastic variations in the production processes. Despite the small size and few samples analyzed in this exploratory study, we believe that analysis of a large number of samples and lots will confirm the trends we have pointed out. We believe that the various and diverse pathologies in the inoculated population are not due to fortuitous problems in manufacturing or

[01:15:04] distribution, but rather to the technology that seems to be common to all these products, which appear to be universally harmful to humans. And they acknowledge the existence of self-assembling nanostructures in these vaccines. This has been validated by several different research labs around the world, and you don't hear about it in the mainstream public. What are these nanostructures? Graphene is an important element thereof.

[01:15:34] Also, niobium, you won't hear anybody talk about this. Nobody has caught this. It's been on my radar for a very long time. We'll get to that in a moment. But those are the conclusions of this paper. And we'll begin to connect the dots for you a little bit more here as we get into this. So now we have the acknowledgement of the existence of these materials that are undisclosed in these vaccines and we have

[01:16:02] the acknowledgement of the self-assembling nanotechnologies present therein. Well, what's their true purpose? And what's important about this specific element that I named here, niobium, that they found there? Like I said, ladies and gentlemen, nobody else. I have heard nobody else talking about this. Nobody. So, let's get to it. What is

[01:16:32] niobium? Why is it an integral component of the transmutation of man to the posthuman? What are its properties? Niobium is element number 41 on the periodic table. Its chemical symbol is Nb. It's a soft, gray, ductile transition metal often found in mineral deposits of pyrochlore and columbite. It was formerly called columbium until 1949. It's a relatively inexpensive metal with many practical uses. Niobium is a

[01:17:01] highly versatile element used in all of the various alchemical technologies that I've listed in this book and that I've identified. It has a cubic crystalline structure, and it's a superconductor. It also has the greatest magnetic penetration depth of any known element. Now, keep that in mind. This also is massively important. Additionally, it is used to strengthen other metals when compounded with them, primarily stainless steel and titanium.

[01:17:31] It is the 34th most common element in the Earth's crust. Niobium is also physiologically inert and hypoallergenic, making it ideal for biological use. Nanotech niobium compounds are the most promising materials for use in biological tissue. They're used for prosthetic devices like pacemakers and other implantable devices. The fact that niobium is also superconductive and paramagnetic opens up some really important

[01:17:59] possibilities for use in the body for the elite to exploit in their quest for the technological singularity. Niobium is also used in aircraft and spacecraft, particle accelerators, quantum computers (the niobium in the computer chips is where the quantum effects happen in the quantum computer), nuclear reactors, optical technology and optogenetics, nanopolymers, and many other advanced technologies. It is currently being

[01:18:29] heavily researched for medical uses and has been successfully aerosolized to make it into an efficient and cost-effective self-assembling nanotechnology with many uses. Niobium has been found and identified in chemtrail debris. Think about this. What do we know about this? Studies have shown that inhaled niobium nanoparticles tend to collect in the lungs. In the lungs.

[01:18:59] Do you know anybody that has lung nodules? I bet you do. Respiratory infections, asthma and chronic pulmonary and vascular diseases are up exponentially. People are discovering that unusual unidentified spots or nodules are showing up in their lungs when they're getting x-rays and CT scans. This is becoming far more common and ever since the rollout of the COVID scam, even more so. Remember, I wrote this book back in 2017. 2017.

[01:19:30] The EPA seems to think it's possible that aerosolized nanoparticles could be causing respiratory problems. Niobium nanotechnology may be the solution to the conundrum that they have here with linking man to the machine. Niobium's relatively cheap and could be used to manufacture nanocomputer networks. They need to have a material

[01:19:59] that is capable of penetrating into living brain tissue to give us some fine scanning capabilities. They can do this with optogenetics, and with this substance being as highly paramagnetic as it is, it would be very useful in mapping the brain, wouldn't it? The possibility

[01:20:28] of nanotech niobium compounds being used to make the jump to transhumanism seems very probable. Faster, cheaper, smaller computers that can bond with biological tissue are precisely what these technocrats are looking for in order to tie the minds of men to the big machine here. With all of its potential uses, niobium could possibly be the key to the transhumanist dream. And then,

[01:20:59] if you want to get into the mythology of it, I mention that in my book, because this element is named after a figure in Greek mythology, Niobe. But we won't get into that aspect of things. I just wanted to keep it strictly on the fundamental level here for folks to understand. We have nanotechnologies that have been injected

[01:21:29] and or somehow assimilated into the human being. They're there. They exist. They've been documented by people looking for them. Niobium. Niobium's a key component here because it is inert in biological tissue. It doesn't cause inflammation. Your body doesn't recognize it as a threat and reject it. But when mixed with other things, it does cause inflammation. But this is a perfect Trojan horse.

[01:21:58] And these types of ideas are probably a bridge too far for most people. But essentially, it's all about attaching you to the Internet of Things. Why do you think they've been really upgrading the Internet infrastructure, the speed of the networks, the different frequency bands being used, and the implementation of all these different technologies? And now

[01:22:28] the only missing component is the AI to compile the data and duplicate it in a massive enough simulation. That's all they're missing. And this is largely due to financial factors. You see, they can only sneak so much money into a program in the black budget community for so long before people begin to bark about it. And we have

[01:22:58] all of this stuff going on with the UAP phenomenon in the background too, and the disclosures being made there about secret technologies. A lot of this stuff is really coming to the forefront now. So now they've got to be a little more public with this. So they're going ahead and they're pumping money into the development of an AI infrastructure. Well, what is this for? This is essentially to collate all the data they are collecting from the human being, and especially the human

[01:23:28] brain, from these various sensors, nanosensors, that they've put there. It's the beast system, ladies and gentlemen, being built, if you want to get to the brass tacks of all of it. And like I said, we'll see how this is received. This is a very crunchy topic, and for some people it's a bridge too far. But I assure you, go and look. You'll be shocked at what you find.

[01:23:58] These technologies that we're talking about here, they do exist. They're certainly out there, and we're seeing in the public view, in the public media, in the mainstream media, every day, more and more disclosures that these COVID vaccines were not on the up and up. What were they really for? They've been extremely harmful to humanity, and people don't want to admit that, at least not in the mainstream, they don't.

[01:24:30] But this is what it's about. We'll see if this actually gets censored or not. I intend to put it out on all of the different platforms that I use, and we'll see what happens. Like I said, this is a very crunchy topic. They don't like a lot of this. This would get flagged immediately

[01:24:58] in past weeks as medical misinformation and thrown out. But we'll see what happens now. We're told that that's a thing of the past now, that we have our free speech. And just to close it out, I'm going to play an audio clip that emerged recently on TikTok. Now, this is allegedly

[01:25:28] a clip of Elon Musk talking to Joe Rogan on Joe Rogan's podcast program. But many people have said that this is actually an artificial intelligence deep fake that's been put out there. And that this is not legitimate. Well, here's the thing. Full disclosure here. It doesn't matter if this was legitimate or real or not. There's a very good reason why this has become

[01:25:58] circulated on the internet as it is. It's there on purpose. It's not there by accident. We're going to play this clip. Now, I doubt the truthfulness of this, but I think there is an air of truth in there as to what the eventual intention is behind what's being said here. I don't think it is manifest in the way that it is said here, but you can see the intention, and this was put out there

[01:26:28] for a reason. Regardless of whether this is truly Elon Musk talking or not, and I don't think it is. I think this is just some type of an AI production. But here it is. We'll play the clip. I'll let you come to your own conclusions about it. How people say life doesn't feel real ever since 2020? Well, look, it's because it isn't. For the decades leading up to 2020, there was something going on that the general

[01:26:58] public never could have expected. Certain aspects of our technology weren't only just providing services for us, they were planning a dramatic shift of consciousness. Google Earth for the last 25 years has been creating a completely digitized three-dimensional blueprint of the entire planet, a perfect database of 3D tracked images of what our external world looks like. While all of our devices that were inside the home, riddled with cameras, have been slowly building back-end digitized replicas of what our households look and feel like,

[01:27:27] silently, siphoned visual information was being harnessed to build a massive digitized replica of our world inside and out, a kind of complex virtual reality that perfectly mimics the real one. Our digital footprints were used to create hyper-realistic digital avatars in this world that look and feel just like us. This virtual reality was being prepared for us to step into... Okay, wait, even if they made this digital replica of the Earth, how would they put us in there

[01:27:57] ...riddled with high-powered neurotransmitters, which, with properly pressurized electromagnetic field dispersal, could be used to target the consciousness in every individual within range and essentially absorb the consciousness out of the body and relocate it, theoretically, even into a digital environment.

[01:28:37] So there you go. Like I said, whether that is legitimately Elon Musk speaking or not, that's irrelevant. They circulated this out on the internet for a reason. And is it true what's being represented here? I don't think it's true in how he described it. I don't think this happened in 2020, but certainly this is part of their plans. And this could be used as an explanation for something like the

[01:29:07] rapture, or a false rapture. I make those speculations in my very first book from which I was reading. And it's food for thought for all of you out there. These technologies certainly exist. They're being pushed and promoted. They're trying to build the AI infrastructure for this right now. And we're seeing unprecedented things happening in the world. We're going to see high strangeness coming down the pike here in the

[01:29:36] very near future. So keep that in mind. So anyway, that is all the time we have for tonight. I hope you found this informative and educational. And I hope you go out there, look into these things for yourselves. You'll be shocked at what you find. And keep that in mind. And I want to thank you all for tuning in. And I'll remind you, I appreciate each and every one of you. We'll catch you next time. Have a good one now. We lead the world in facing down a threat to decency and

[01:30:06] humanity. What is at stake is more than one small country. It is a big idea: a new world order...