
Subject: Hypothesis-generating software, marionette arms, and epigenetics research.
From: Joe Mardin
Newsgroups: sci.engr
Date: Sun, 23 Jul 2023 13:09 UTC

All the technologies as well as other ideas I, Treon Verdery, think of are public domain.

Longevity technology: Hordenine is a MAO-B inhibitor, thus it possibly has longevity effects like deprenyl. One source online says hordenine is the “dimethylated version of Tyramine (the amino acid). Commonly found in herbal supplements, it is used as an effective MAO-B inhibitor.” Some organisms naturally produce hordenine; it “has also been detected in some algae and fungi.” Copying the hordenine-producing gene, or heightening the activity of a methylating enzyme acting on tyramine, seems likely to be easier than producing an entire deprenyl small molecule. So screening hordenine and molecular variants as a longevity drug makes sense.

Hordenine affects TAAR receptors like phenylethylamine (PEA) does, so it might block or amplify PEA effects. “In a study of the effects of a large number of compounds on a rat trace amine receptor (rTAR1) expressed in HEK 293 cells, hordenine, at a concentration of 1 μM, had almost identical potency to that of the same concentration of β-phenethylamine in stimulating cAMP production through the rTAR1. The potency of tyramine in this receptor preparation was slightly higher than that of hordenine.”

Hordenine could reduce harm to plants or crops from insects: “Hordenine has been found to act as a feeding deterrent to grasshoppers (Melanoplus bivittatus),[38] and to caterpillars of Heliothis virescens and Heliothis subflexa; the estimated concentration of hordenine that reduced feeding duration to 50% of control was 0.4M for H. virescens and 0.08M for H. subflexa,[39]” suggesting possible value in genetically engineering it into leaves and awns.

Math entertainment: “in essence, differentiable learning involves a single, enormous mathematical function--a much more sophisticated version of a high school calculus equation--arranged as a neural network, with each component of the network feeding information forward and backward” https://www.eurekalert.org/pub_releases/2019-04/hms-fr041619.php makes me think of a math function with simultaneous action, with the “feeding information forward and backward” part being like, or suggesting, the following: just as some equations might use a minimal definition of time, there could be different minimal definitions of time, and some of these versions of time could have different effects and intrinsic forms. So, for example, a matrix that updates from the edge toward the center might have a different step count than a linear raster scan, and some topologies of time style are non-reducible to arrangements of linear rays. Then there is my perception of how some equations have instantaneous solutions, while others have iterable solutions; iterations, and iteration style, remind me of different styles of time. At different non-reducible styles of time some math things could be described, or perhaps solved, with just one, two, or several iterations that use a different style of time. Like x=y^3 is instantly solved as a graph, or can be iterated to find values as a plug-in function; so at a different shape or style of time x=y^3 could be iterated using the novel time form/style twice or 7 times to produce a solved system, like my perception of that equation being fully present/solved as just a visual graph.
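
As a minimal sketch of just the instant-versus-iterated contrast on x=y^3 (ordinary standard-library Python; the Newton iteration and its starting guess are my illustrative choices, not a new time style):

def cube_root_closed_form(x):
    # "Instant" solution of x = y**3: one evaluation, no iteration.
    return x ** (1.0 / 3.0)

def cube_root_newton(x, tol=1e-12):
    # Iterated solution: repeat y <- y - (y**3 - x) / (3*y**2) until it settles.
    y, steps = max(x, 1.0), 0
    while abs(y**3 - x) > tol:
        y -= (y**3 - x) / (3.0 * y**2)
        steps += 1
    return y, steps

x = 27.0
print("closed form:", cube_root_closed_form(x))   # 3.0, zero iterations
print("Newton     :", cube_root_newton(x))        # ~3.0 after roughly ten steps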

It is possible to see how different styles of time iterate at a different quantity of events to produce a math value. This reminds me of a wavelet representing/solving a wave in fewer chunks than a Fourier transform. So there could be some new computer science in the realm of new styles of time that iterate equations and programs differently. Also, researchers could look to physics to find naturally occurring time styles that are novel or new to computer science and math.

Perhaps parsimonious descriptions of these different styles of time could be made, and then physics looked at to see if some things like photons or particles have different parsimoniously stated ideas or styles of time. Then there could be math-thought and actual equations about what it means when two kinds or styles of time overlap, or meet each other. Sort of like two Fourier waves can meet each other, or a Fourier wave can meet a wavelet, or a Fourier wave can meet a square wave: parsimoniously stated and different time styles could meet each other, like the very predictable, linear style meeting the parallel update of a grid or matrix like #, or better, as part of a time style, having the plural object or variable system expand from the center, or concentrically narrow from the perimeter. There are many possible different parsimonious time styles that could have mathematical development as well as be searched for in actual physics. Also, the minimum iteration topology/shape definition that causes a Turing tape machine or a 1,1,0 automata sequence to advance and “do” could be another style of time. A new time more-than-sequence/geometry style, a one-fold topology like a Mobius strip, could contain a novel time style, much less complex than the time style-topology as well as topological space of a Turing machine. So there could be all these math-like definitions of topologies of time that are non-reducible to each other, that is they are fundamental components, and then one could study what happens when these styles of time meet each other; one could also look at observed physics to see if any actual things have mutually observed and potentially also mathematically exclusive time styles.
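
A toy sketch of the raster-versus-concentric update contrast mentioned above (the 5x5 grid size and the ring-index formula are my illustrative assumptions; both orderings visit the same cells, they just take different numbers of steps):

# Toy sketch: the same 5x5 grid updated in two orderings -- a linear raster
# scan, one cell per step, versus a concentric sweep that updates one whole
# perimeter ring per step. A loose stand-in for "different time styles"
# having different iteration counts; all names here are illustrative.
N = 5

# Raster ordering: the step number of cell (r, c) is its position in the scan.
raster_step = {(r, c): r * N + c for r in range(N) for c in range(N)}

# Concentric ordering: the step number is the ring index, i.e. the distance
# to the nearest edge; every cell in a ring updates in the same step.
ring_step = {(r, c): min(r, c, N - 1 - r, N - 1 - c)
             for r in range(N) for c in range(N)}

print("raster scan needs", max(raster_step.values()) + 1, "steps")     # 25
print("concentric sweep needs", max(ring_step.values()) + 1, "steps")  # 3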

Things with mutually exclusive time styles might be definitionally unable to interact, which reminds me of a new technological thing like the light cone, yet possibly basable in a much wider range of actual physics materials: thermal or electrical insulators at integrated circuits and computers. You could make something that was non-erasable even with linear-ray-vector time, which the 20th century AD seemed to think was the usual kind: a novel kind of permanence. Handy at nanotechnology for longevity/system maintenance to identicality.
Likewise you could make something that was more erasable at a wider number of parsimoniously different time styles. That could have novel entropy effects, and, being a bit literal, could possibly cause higher energy density at chemical systems with multi-time-style easy erasability.

What is the effect of new styles of time on Claude Shannon’s results, like the most effective error codes/error reduction codes, when a parsimoniously defined time style goes beyond the iterator or vector-ray version (just one of a plurality of possible time styles) that people used to think about chronoprogression and/or computational iteration and/or instantaneous solution of a function, when thought about with Shannon’s intellectual content?

Thinking of Shannon, new math based on time styles could be possible that affects computers and the structure of knowledge like probability. Updating probability could improve scientific research and engineering.
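
For concreteness about the error-reduction codes referenced above, a minimal 3x repetition-code sketch; this is standard textbook material and says nothing about new time styles:

# A 3x repetition code: each bit is sent three times and a majority vote at
# the receiver corrects any single flipped copy. A minimal instance of the
# error-reduction codes mentioned in the text.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                    # the channel flips one copy of the second bit
print(decode(sent) == message)  # True: the single error is corrected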

Different parsimonious time styles could possibly affect relativity, making groovy new technological relativity shapes, as relativity equations might work differently on different time styles: comparing a linear ray, a concentric modification matrix (possibly the time form of passing a sphere through a sheet), or a minimal time-motioning topology of a 1,1,0 automata; and there is the nifty idea of applying relativity to the time style of waving back and forth, tide-like, which is a parsimonious time style (and might be physically present at the delayed quantum choice eraser).

Basically, a version or time style (possibly with a mathematically bigger definition, perhaps a little like the thing/hole pair of phonon math, which has back and forth described in one equation) of sequence, like a two-track bidirectional sequence, or one track that can go back and forth (the phonon math version), where back and forth (slish slosh) is a normal, equation-described part of that time object definition. Just as an aside, time reversibility at things like the delayed quantum choice eraser could be described mathematically with equations to make a slish-slosh time object description (found in physics/nature); as a technological application, that slish-slosh math object could then be applied to neural network node topologies to deposit/accumulate/form/create object-value at their node composition.

I have no idea what a slish-slosh parsimonious time style, or other parsimonious new time styles, would mean to relativity, or to the technology shapes that can be made with relativity (among them observer-frame simultaneity and non-simultaneity coexisting).

The slish-slosh time style makes me think of neural network nodes being updated, as a particular object form at a time style, “arranged as a neural network, with each component of the network feeding information forward and backward”. Basically a version of time or sequence where back and forth (slish slosh) is a normal, equation-described part of that time style object definition. Could a slish-slosh time style math improve the math and function of neural networks? Being extreme, could actual observed physics of different time styles note, and create value at, natural slish-slosh systems being used at new technology of neural network computers, and possibly integrated circuit matter compositions? During 2019 AD, at photonics, the delayed quantum choice eraser could easily be modified to be an oscillating precausal system that slish-sloshes data at a neural network node geometry/computer program.

Just as an aside, time reversibility at things like the delayed quantum choice eraser could be described mathematically with equations to make a slish-slosh time style object description (found in physics/nature); this could heighten the efficiency of computer software as well as of the matter the computer is actually made of. That slish-slosh math style object could be applied to neural network node topologies to deposit/accumulate/form/create node-object-value at their node composition. Basically it could make neural networks better, faster, more descriptive of the task, as well as more mathematically and topologically diverse in a way that supports applications, notably as a technology as well as a nifty math thing.
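
The quoted “feeding information forward and backward” is, in ordinary terms, a forward pass and a backpropagation pass; here is a minimal numpy sketch of one of each through a tiny one-hidden-layer network (the network sizes, learning rate, and random data are my illustrative assumptions, and this is plain backpropagation, not the proposed slish-slosh math):

# One forward pass and one backward (gradient) pass through a tiny network.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 features
y = rng.normal(size=(4, 1))          # targets
W1 = rng.normal(size=(3, 5)) * 0.1   # input -> hidden weights
W2 = rng.normal(size=(5, 1)) * 0.1   # hidden -> output weights

# Forward: information flows input -> hidden -> output.
h = np.tanh(x @ W1)
pred = h @ W2
loss = np.mean((pred - y) ** 2)

# Backward: the error signal flows output -> hidden -> input weights.
d_pred = 2 * (pred - y) / len(y)
d_W2 = h.T @ d_pred
d_h = (d_pred @ W2.T) * (1 - h ** 2)   # tanh derivative
d_W1 = x.T @ d_h

# One gradient step using the backward-pass information.
W1 -= 0.1 * d_W1
W2 -= 0.1 * d_W2
print("loss before step:", round(float(loss), 4))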

What if it was possible to develop a better, more-changing-at-each-“limned function”-occurrence-of-activity form of time style from the existing math of slish-slosh? Something better than the first, initially figured out and equationally described, slish-slosh?

Along with the awesome math, slish-slosh 2 could be a new kind of computer clock (perhaps replace the word “clock” there with “active chronotopology”). Also, a new way to address the actual physical things on integrated circuits that would produce more time goodness out of the possibly measurable energy input that becomes/instantiates the new form of time style used to chronotopology-drive (formerly “clock”) the knowledge modifying and generating technology, MGT (the previous word for MGT is roughly “computer”, but “computer” might be a word that previously implied math forms with a ray-like time style sequence of solving). Or, put the previous way: the new time styles produce technologizable actual things that replace computer clocks and architecture; these actual things make the computers do the equivalent of solving the previously unsolvable, going faster, or being more reliable (Shannon), or loading up neural network nodes with cheap identity production/reweighting, among various possible improvements.

Incidentally, looking at probability as an entertainment phenomenon along with the usual emphasis on prediction: What does it mean if a function perfectly describes a data set? One version or view is that a raster-like bounded data image, perfectly stated as a function (there’s that function I saw on YouTube that draws any raster image), could have predictive irrelevance, unless there was a “position” or “environment” operator that showed which way the data would drift if some equation were applied to it, as an instant function, or time applied to the drift, or a new time-style form applied to the fully-predictive equation (like the raster one).

So just thinking about it, “What does it mean if a function perfectly describes a data set?”: if, big if, the data set is based on observed things like science, it could mean that the function has some non-zero probability of having “leading information” as to what actual thing the dataset is!

You might be able to have probability above chance (at a finite, or with sufficient math, perhaps nonfinite, list of possible things something could be): like something is more likely to be physically sourced in time series data if it has a Poisson distribution, or you might be able to speculate that something with a normal distribution was made up of many samples of something that could be multiply recombined.
So a function that defines a dataset in its entirety, at least with numbers derived from measurement, might contain clues as to what the dataset is. Now, I perceive there are an arbitrary nonfinite number of equations that will describe any finite dataset, so the idea of winnowing out “what it might actually be” from chance seems peculiar; but if you make a big list of every equation that has previously been used to predict something in the scientific literature, and then you use that short list of 1 million (or more) equations to see if any of them are partially, or quantifiably partially, expressed, then, among the million stats models to screen the data-containing function for, a Poisson distribution might be found, being “leading information” about the function that precisely produces all the actual data.
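
A minimal sketch of that screening step, with a three-entry “library” (Poisson, normal, lognormal) standing in for the million equations and an AIC-style score standing in for “quantifiably partially expressed”; the library contents, the scoring rule, and the simulated data are all my illustrative assumptions:

# Screen a data set against a small "library" of previously useful models
# and rank them by an AIC-style score (lower is better).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.poisson(lam=4.0, size=500)  # stand-in for "the measured data"

def poisson_score(x):
    # AIC = 2 * (number of fitted parameters) - 2 * log-likelihood.
    lam = x.mean()
    return 2 * 1 - 2 * stats.poisson.logpmf(x, lam).sum()

def normal_score(x):
    mu, sigma = x.mean(), x.std(ddof=1)
    return 2 * 2 - 2 * stats.norm.logpdf(x, mu, sigma).sum()

def lognormal_score(x):
    shifted = x + 1.0  # count data can contain zeros; shift keeps the log defined
    s, loc, scale = stats.lognorm.fit(shifted, floc=0)
    return 2 * 2 - 2 * stats.lognorm.logpdf(shifted, s, loc, scale).sum()

library = {"poisson": poisson_score, "normal": normal_score,
           "lognormal": lognormal_score}
scores = {name: fn(data) for name, fn in library.items()}
print("best match:", min(scores, key=scores.get))
print(scores)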

Then you could do an experiment, getting more data (than that described by the all-data-descriptive function), to verify or refute that the Poisson distribution was actually a valid statistical shape, or possible methodology, at the one-function data. Poisson treats things that change over a chronointerval. So you could do something non-time-dependent, that does not change the chronointerval, and still see if it changed the data in a way that reinforced, supported, or even refuted the previous “fit” of thinking the data was a Poisson distribution.
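
One standard way to phrase that follow-up check on new data is a chi-square goodness-of-fit test against the fitted Poisson; the binning choice and the simulated “new experiment” data below are my illustrative assumptions:

# Chi-square goodness-of-fit of fresh data against a fitted Poisson.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
new_data = rng.poisson(lam=4.0, size=400)  # stand-in for the follow-up experiment

lam = new_data.mean()
ks = list(range(8))  # bins for counts 0..7, plus one bin for everything >= 8
observed = [np.sum(new_data == k) for k in ks] + [np.sum(new_data >= 8)]
expected = [stats.poisson.pmf(k, lam) * len(new_data) for k in ks]
expected.append(stats.poisson.sf(7, lam) * len(new_data))

# lambda was estimated from the data, so one extra degree of freedom is dropped.
chi2, p = stats.chisquare(observed, expected, ddof=1)
print("chi-square p-value:", round(p, 3))  # a small p would count against the Poisson fit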

So this idea, that you might be able to find, or dredge up, something (like a numeric relation, possibly a statistics model) that at a previous million scientific publications had predictive value somewhere sometime, and systematically find it in an equation or function that precisely (even raster-equation-like) produces all the fixed and actual (measured) data, could be possible.

This comparison to the big set of all previously valid models, ever, brings up an idea that is new to me: there are big human knowledge sets you can compare everything to, then do instantaneous math to solve. The possible efficiency or new math of value here is that you solve for two equations (function one: a function that might raster-scan, fully describing all the data; function two: a big pile of math that instantaneously represents all previously useful statistical math, like the T value equation, the Z value equation, the Poisson distribution, and a million others).
This overlaying (solving) of the two functions can be instantaneous, where solving the two instantaneous equations is more rapid than something that iterates; that makes it more rapidly solvable with computers. Also, the generation of fresh equations comparing the stats/numeric methods “periodic table like thing” could yield optimizable, completely new equations that describe things, or computer algorithms, if the mathematician or programmer felt like doing the new knowledge generation; those new equations improve the velocity of computation and likely extend capability as well.

The benefit here is that a person has some data, and then, on screening the 1 million stats/numeric methods “periodic table”, this program tells them what it’s about: like, “periodically repeated Poisson and log distribution found” often describes repeated cycles of (perhaps seasonal or daily) growth. Perhaps an improved piece of software suggests that the data, on periodic-table-style screening, could suggest to the human that the driver and source of the data might be the time of day at a system with moving components.

Unbeknownst to the software or the human, the software might have been looking at trolley ridership. The program found drivers and numeric-method sources without referring to the actual matter or energy composition of the thing being analyzed.

If any of the actual existing matter-based structure is available to the human, then of course the human can look at that, devise an experiment, and indeed change the trolley fare by time of day to see if it affects ridership. The thing is that, using a million-previously-predictive-equation set (the periodic table function comparison system), these kinds of things, like suggesting trolley passes, or rather adjustments to the equation with a predicted mechanism that is equivalent to introducing a trolley pass, can be done and tested without a human-generated hypothesis.

The software could find and describe areas of the overlapping (function 1, function 2) equations where slight changes are most likely to have large measurable effects. The software could suggest: “find an item which can be rescheduled; if it changes the Poisson-distribution-like character of the fully-describing raster function then the Poisson description is more likely to be an actual driver.”
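
A sketch of that find-the-sensitive-spots step as a plain finite-difference sensitivity screen; the toy ridership model and its variables are made-up stand-ins, not a real fitted function:

# Nudge each input of a fitted model and rank inputs by how much the output moves.
import numpy as np

def ridership_model(fare, frequency_per_hour, stations):
    # Illustrative stand-in for the fitted "function 1": not a real model.
    return 1000.0 * np.exp(-0.8 * fare) * np.log1p(frequency_per_hour) + 5.0 * stations

base = {"fare": 2.0, "frequency_per_hour": 6.0, "stations": 20.0}
eps = 1e-4

sensitivities = {}
for name in base:
    bumped = dict(base)
    bumped[name] += eps
    sensitivities[name] = abs(ridership_model(**bumped) - ridership_model(**base)) / eps

for name, s in sorted(sensitivities.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} |d output / d input| ~ {s:8.1f}")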

Really impressive software could come up with winnowing by ease of experiment, that is, suggesting the easiest possible change that validates or refutes a periodic-table-of-one-million-numeric-methods numerically autogenerated, or fresh human (or various AI) generated, hypothesis.

Better software could winnow tests of these hypotheses by suggesting experiments that are the equivalent of giving out complimentary bus passes and finding out how the new data changes, as compared with the more effortful rescheduling of trolleys, or the effortful moving of entire trolley stations; any of these experiments would modify what appeared to the software to be a Poisson distribution, so with that modification the idea that a Poisson distribution is a descriptor/driver of the phenomenon is supported.

The software has suggested things used to verify that the Poisson (chronological interval sequenced) distribution is an authentic attribute of the system. So the result of the winnowing and experiment would be a new system-predictive equation that has generated its own deductive “looks like an actual thing” component, by finding a plausible and successfully tested deducible from the periodic table of numeric methods list that got simultaneous-function solved. Note humans and AI can of course think of completely new things outside of, and/or possibly addable to, the numeric methods periodic table. It is just that this is software that both finds standout bases for deduction and generates more accurately predictive equations.

(more notes that came before some of the previous writing)

(About the library of a million or more previously published numeric relationships.) It is primitive thinking on my part, but it reminds me of the periodic table. You can scan any physical object against it and get a match, and that match (elemental composition) then predicts what the thing might do, and what it can be turned into. So layering an equation, possibly a function, that happens to produce a Poisson distribution onto a periodic table of 1 million statistics models makes it say, since it is an observation of an actual thing: “Poisson distribution”. Then, like an element on the periodic table, “Poisson distribution” means that, besides testing new hypotheses and/or validating a model’s equations:

You can do things to it to get predictable, possibly technologizable effects or experimentally predictable effects just like you could with an element: like halve the sampling rate yet still have numeric adequacy to make p<.01 predictions; or, on finding that a normal distribution was present, double the sampling rate to measure a new value farther out along a standard deviation value, like a really high number or a really high spectral emissions band event. So a variation on the million numeric method thing that finds sources and drivers of actual data, including new synthetic deduced thingables, could be used to suggest different measurements that improve phenomena investigation; basically this kind of suggests better optics: see rarer events, see bigger or smaller events, possibly even trace event contingencies with an experiment/new measurement on the system. That has technological applications as well. It is kind of obvious to a person, but if you say, “measure twice as many leaves and you are likely to find a new biggest one,” and you want to find a plant with big leaves so you can breed it to be a better agricultural crop, then the software suggesting doubled frequency of observation has a cognitively comprehensible technology benefit.
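
The “measure twice as many leaves” point can be shown with a quick simulation of how the expected largest observation grows with sample size; the normal leaf-size distribution and its parameters are my illustrative assumptions:

# Expected largest leaf in a sample grows as the sample size grows.
import numpy as np

rng = np.random.default_rng(3)
trials = 2000
for n in (100, 200, 400):
    biggest = rng.normal(loc=10.0, scale=2.0, size=(trials, n)).max(axis=1)
    print(f"n = {n:3d}  mean largest leaf ~ {biggest.mean():.2f} cm")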

Also, the math of a static function descriptor of the data (there are much better ones than the raster equation as a function that can describe an entire data set, but it was mentioned), when overlain, or instantly function-computed aka equation-solved, against a million-element instantly solvable, definitional equation (is it possible to make a really big equation that simultaneously contains many numeric method equations in it?), is possibly much faster than some other methods.

That high-instant-solvability version of the equations, among them the one that generates all the actual data, could make it so some questions about the data, and/or the source/structure/drivers of the data, are computable as an effect of the overlaying or simultaneous solution of two equations; that is, it is theoretically instant, as compared with the duration of iteration at a computer-software-derived solution.

I read that a deep learning AI can do protein folding calculations a million times faster than some previous numeric method. https://www.eurekalert.org/pub_releases/2019-04/hms-fr041619.php

It is possible a different version of the “million published numeric methods” periodic-table-like function could actually be an AI, neural network, or deep learning created thing. I perceive there is research into instant-identity solvable equations (possibly differential equations) that represent, possibly with deep learning (to name one approach), a math function an AI has trained into being at a neural network. So perhaps the incremental chronological-moment-utilizing (similar to: amount of time at a linear geometry; other geometries would have different time styles) effect of using a computer on data, where an AI studies the data, could be gotten around or sped up with a math-function work-alike of the AI’s neural network.
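
Reading the “math function workalike” as a surrogate model, here is a minimal sketch of fitting a cheap closed-form stand-in (a cubic polynomial) to a slow iterative computation once and reusing it; the slow function and the polynomial degree are my illustrative assumptions, not what the protein-folding work actually did:

# Fit a cheap surrogate to an expensive iterative computation, then query the surrogate.
import numpy as np

def slow_computation(x, iterations=20000):
    # Stand-in for an expensive iterative solver.
    total = 0.0
    for k in range(1, iterations):
        total += np.sin(x / k) / k
    return total

train_x = np.linspace(0.0, 5.0, 30)
train_y = np.array([slow_computation(x) for x in train_x])
coeffs = np.polyfit(train_x, train_y, deg=3)      # the cheap surrogate

query = 2.7
print("slow     :", slow_computation(query))
print("surrogate:", np.polyval(coeffs, query))    # close, and far cheaper to evaluate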

So, basically there’s this thing where, if you have a million-numeric-method thing that is like a periodic table, and you find matches on it, then the software suggests an experiment to verify that the numeric method is predictive/meaningfully descriptive, and furthermore the numeric method might be correlated, linked, or perhaps 1:1 linked, that is directive, of a physics/chemistry/biology sourcing form. So how can this be used? As described in a Quora answer I wrote responding to something like “if correlation does not imply causation, what does it imply?”, I found applications in finding steering constellations, called marionettes, among them possibly completely new marionettes. One marionette might be social wealth being upstream of child raising style and government spending, which is actually one mathematically findable big part (one arm of the marionette) that finds a basis for the “implausible absent further upstream hypothesis” correlation of hanging with building space stations.
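
A sketch of that marionette-arm reading: simulate an upstream driver that feeds two otherwise unrelated variables, then show the implausible-looking correlation mostly disappears once the upstream variable is partialled out; all the variables and coefficients are synthetic assumptions:

# Two variables that look implausibly correlated may both hang off an
# upstream driver; partialling out the driver via regression residuals
# makes the raw correlation largely vanish.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
wealth = rng.normal(size=n)                    # hypothetical upstream driver
hangings = 0.7 * wealth + rng.normal(size=n)   # both "arms" driven by wealth
stations = 0.6 * wealth + rng.normal(size=n)

def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw = np.corrcoef(hangings, stations)[0, 1]
partial = np.corrcoef(residuals(hangings, wealth), residuals(stations, wealth))[0, 1]
print(f"raw correlation        : {raw:+.2f}")      # clearly nonzero
print(f"after removing wealth  : {partial:+.2f}")  # near zero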

Uses: finding new arms of new marionettes creates new areas of chemistry, biology, genetics, social media, communications, quantitative psychology, and even sociology that can be used to describe, and make testable experiments on, things that affect human existence and well-being.

During 2019 AD a human mind might think, “does air pollution have an epigenetic effect?” and then devise an experiment.

This numeric-method-matching software traces a plurality of effects at the big instant equation of everything that’s been published; then at least some of these, perhaps 1 per thousand (periodic-table-like objects), could be testable as to whether they have epigenetic components that are upstream from the previously described math thing, which is where you find an upstream source (marionette, marionette arm) to generate a hypothesis-testable relationship from examining a bunch of correlations, notably some which to a human would seem implausible. Restated: the software finds correlations that may then be hypothesis-tested as to whether they have an epigenetic component.

The software then produces a list of 1000 or more new, human-unconsidered subjects that have pre-experiment math support for being epigenetically controlled (the math at the published papers looks a lot like the math that crops up at epigenetic systems).

Then, if the software is really effective, it finds those (testable-for-epigenetics math and topic objects) where the output of the equations has the highest generated change with the smallest variable change. This finding of sensitive spots, at a vast number of different objects, facilitates easy, robust experimental measurements; that then creates a littler list of possibly epigenetically driven things amenable to really easy, possibly fast, or few-sample experiments that the software numerically thinks will provide large magnitudes of difference at their measured results.

So, instead of one human grabbing one idea out of their mind and testing it, the software comes up with a (repeatedly concentrated) list of, say, 100 things likely to have epigenetic contributions, or even effect-directing contributions, and then lists these epigenetic testable hypotheses out. Then a human looks through them to find ones of interest to humans, like: chocolate milk has epigenetic effects on the (2019 majority of) elementary school students that drink it. (The software figured out school lunches were easy to test/modify, and might have figured out it is easy to change them if that change is beneficial.)

Another epigenetic thing the software might list as being the kind of thing that could be experimented on to produce knowledge with high numeric validity support is: “working two part time jobs has a measurable epigenetic difference from one full time job; both have the same epigenetic change direction, which is away from the epigenetic effects of unemployment.” Then of course the software could find things that cause the same epigenetic effect (actual proteome and other -ome effects) as the form of remunerated labor that has the highest participant satisfaction.

That new non-work epigenetic functionalike might then be tested (near here the humans, informed by the software, are directing the experiment generation) to generate health, cheerfulness, optimism, wellness, and measures of romantic-partner-things-going-well that are epigenetically functional at unemployed persons and retirees, and might possibly be “double dosable” at the already employed to confer even greater epigenetic benefits.

Again, the thing is, the software generates very large numbers of hypotheses screened for human value and for ease (and high validity) of experiment. That goes well with new science discoveries and developable technologies that benefit human beings, that is, people.

Along with humans reading the lists of easy, potentially awesome experiments, it is possible software could seek value for humans. The software could winnow the list based on any previously published connection to the 1000 or 10,000 most salient, popular, physically relevant, or beneficially entertaining topics.

There is an actual, if perhaps unimportant, risk of looking for things that have already been thought of. If the software goes looking for things that people currently consider beneficial, then humans-in-general might get less mind-share about new things that they would later value but do not yet have a preference for at numeric-methods-table-based software discovery.

Personal computers during the 20th century AD were new, and contributed much to human well-being. But if a 1968 hypothesis-ranking/easy-test software were active, it might promote research into new greetings and new kinds of small talk that were projected to raise interpersonal satisfaction, and quantitatively measured willingness to assist, by 50%. These are valuable new word forms of conversation if they exist, yet it also would be nice if the software could think of personal computers.
(As an aside, these 50% upgrades on pleasant conversation and willingness to assist already exist; garments and social manners, notably varied at the population, can already do this. I might have read that hitchhikers get picked up several times faster in a suit, so that is a few hundred percent.)

Also, there is of course a difference between what the upstream-from-correlation/numeric-methods-matching/easy-test hypothesis generating software suggests that people research, create, make and do, and the actual amount and content of what they do. There is a lot of variation, and although correlation software may find several things with more marionette-arm activity, that is, testable, adjustable directive causation, than money, it is possible that people would only economically participate in things that are both actively beneficial as well as possible to value and/or enjoy.

Adjustable things that have an effect: I use the word “marionette” or “marionette arm” to mean a thing that has a causal, notably mathematically modelable, likely predictable, and experimentally testable structure or object that drives an effect.

Some marionette arms are known and discussed, and frequently people suggest improvements to the known ones. (Basically a marionette or marionette arm is like a constellation of contributing effect-drivers, notably derived from looking upstream from correlations at actual published things; also the use of a somewhat periodic-table-like list of numeric relations used to find what math apparently describes/drives a then-testable thing.) Some things that might be marionettes, and that previously have a mixed reputation, could be improved when people figure out more about them: things that are described as having “externalities”. Free trolley fare causes more employment opportunity but has the externality of reducing people having private vehicles, which then reduces recreational activity and scope somewhat.

Money could be measured as having a particular magnitude of effect, and then the software could be directed to describe testable, numerically found things that have a bigger effect. It is possible some of those bigger-than-money things would be completely new to human beings, thus early technologies might make big improvements, as the 80/20 rule (20% of the effort produces 80% of the results) might be going on, at least as compared with the continuing minute adjustments to money like debit cards. Romance could have a higher upstream causative effect on various measurables than money. Not only does being partnered often cause children, and human physiological children are awesome and wonderful, and I think there should be more human physiological children, but children possibly adjust fiscal capability more than something like the greater job mobility and earnings increase from a college degree; dissolving a romantic partnership can strongly change expense ratios, perhaps even more than almost any thrifty <-> luxurious city locationing of the person. The software might show romance is a stronger marionette cause than money, and the software could suggest a software-generated list of testable hypotheses with predicted high responsiveness. Giving single people memberships at online personals sites, with a hypothesis that this will produce greater happiness and earnings, might be the kind of easy experiment the software generates. Humans would of course, at least sometimes, review and improve the suggested hypotheses and experiments. So a human might modify the experiment to be: give single people memberships at online personals sites on their birthday.

There is also value to having the software that matches published data and numeric methods at a kind of periodic table generate experiments that then support/verify the driving/causal effect of particular variables; having the software specify and commission experiments without human input or consultation. Let’s say there is a published meta-analysis. Just two more studies would give it p<.01; right now it is only at p<.1. The software says to do the experiments, and the higher-than-one-study value of the meta-analysis is gained.
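
The two-more-studies arithmetic can be made concrete with a standard p-value combination rule (Stouffer's method, available as scipy.stats.combine_pvalues); the individual study p-values below are made up to mirror the p<.1 to p<.01 scenario:

# Combine study p-values before and after adding two commissioned studies.
from scipy.stats import combine_pvalues

existing = [0.20, 0.25, 0.18]  # three published studies; pooled p ~ .08, short of .01
print(combine_pvalues(existing, method="stouffer"))

with_two_more = existing + [0.05, 0.06]  # the two commissioned follow-up studies
print(combine_pvalues(with_two_more, method="stouffer"))  # pooled p drops below .01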

Similarly, at the previously described finding of a list of 1000 things that testably could have an epigenetic component, just doing those 1000 experiments avoids having human-perceived-value bias winnow the group in a way that might miss out on or exclude the development, and numeric support, of knowledge that is, when found, amazingly beautiful, or a strong basis for other research, or of tremendous (possibly technologizable) utility. For example, at epigenetics perhaps there is a “clear all previous epigenetics” chemical or environmental influence the software might find. Things based on this could be highly beneficial medical or even social (videogames? circadian rhythm resetting watch?) technology.

If, at a human about whom nothing is known, one third of their epigenetics are beneficial, one third have no numerically discernible effect, and one third are nonbeneficial, then a “clear all epigenetics” thing or method could bring tremendous improvement, or even make it possible to import a custom version of the beneficial epigenetics to the person’s life, onto a newly cleaned and prepared physical being. Also, installing new beneficial epigenetics could be something even better than the portion of their epigenetics that is already going well.

One area of clearing epigenetics might be resetting the immune system, possibly setting it to a momentary zero. Having longevity promotion tendencies, rapamycin, the immunosuppressant that makes rats live 60% longer, could be a human-testable immunological reset, with longevity benefits, that could zero parts of the immune system, possibly renewing the epigenetic “programmability” of all or just parts of the immune system. There are also far out things like a massive, week-long, high dose of Benadryl (diphenhydramine), as much as causes continuous unconsciousness, as a suppression of immunoreaction.

It is possible that if a system is epigenetically perturbed, say by a surplus of methyl donors from weird food or a period of emotional stress (I read stress causes epigenetic changes), it has a spontaneous rate or velocity of moving back from changed to epigenetic neutral. What affects that velocity? It could be genetic, and that suggests things like protein drugs and new small molecule drugs could be developed that would then cause more rapid progression back to epigenetic neutral.

Then there could even be drugs that, when coadministered with something that provides epigenetic benefit, say arginine (possibly) or “love”, cause that epigenetic benefit to be greater. Possible versions of more or deeper epigenetic effects are that the genes that get regulated differently from the epigenetics get regulated at a wider range of tissues. One possibility is increasing the number of ribosomes or increasing the amount of transfer RNA (tRNA), or upping the post-translational modification at things like the golgi apparatus. As to new epigenetics drugs, I think a human researcher could make a very big (perhaps easy to very long) list of things that could increase or decrease epigenetic effects produced from a standard effector group: all the growth hormones, even things like insulin and anti-insulin (glucagon); anything that affects liver metabolism of circulating molecules.

So, I may have read that protein production is circadian; it is kind of like protein production, but is mRNA production circadian? Drugs or actions/environments affecting anything that contours mRNA production compared with a level-state graph would affect all the proteins produced at the body, modifying the entire proteome; and those drugs or possibly environmental effects could be a new Super Epigenetic agent.

So with Super Epigenetic agents, is more or less better? Less metabolic activity is published as producing better healthspan and longevity. Longevity improvements from less protein and from caloric restriction up autophagy, so it is different than just less wear. So the research question is then whether adjusting mRNA production, with a variety of different approaches, affects lifespan. Maybe it would be a little like the way things like yeast grow faster with their histones removed, rapidifying mRNA activity at translation. More process-gradualizing histones cause longer lifespan though. So a drug that reduces mRNA activity could be a longevity drug.

Other Super Epigenetics approaches/drugs: Perhaps post-translational modifiers like the golgi apparatus are where a lot of the noncoded epigenetic modification to actual protein molecules, not just the actual amount of protein (mRNA), happens. Drugs or environments that affect the golgi apparatus could be quantitatively measured as to their epigenetic effects. These new drugs or environments might also be a source or way of doing an epigenetics reset to neutral as well.

engineering, chemistry, computer ic, computer fab, longevity, longevity technology, treon, treon verdery, physics, lasers, laser, semiconductor, dimension, math, IT, IL, pattern resonance, time travel, chronotechnology, circle, eric the circle, cartoon, healthspan, youthspan, cpi, manufacturing, fiscal, money, software, petroleum, archive at deviantart com user treonsebastia

All technologies, ideas, and inventions of Treon Sebastian Verdery are public domain on July 8, 2023 AD and previously, as well as after that date.
