Design: Self-Testing Systems
I had an idea I'd been mulling over with some colleagues. None of us knew whether anything like it currently exists.
The basic premise is a system that has 100% uptime and can become more efficient dynamically.
Here is the scenario:
* Hash out the system with a specified set of interfaces. It has zero optimizations, yet we are confident it is 100% stable (dubious, but for the sake of the scenario please play along).
* Profile the original classes, and start programming replacements for the bottlenecks.
* The original and the replacement are initiated simultaneously and kept synchronized.
* The original is allowed to run to completion: if the replacement hasn't completed by then, it is vetoed by the system as a replacement for the original.
* The replacement must return the same value as the original, a specified number of times, over a specific range of values, before it is adopted as the replacement for the original.
* If an exception occurs after a replacement has been adopted, the system automatically retries the same operation with the class it superseded.
Has anyone seen a similar concept in practice? Critique please...
The comments below were written after the initial question, in response to the posts:
* The system demonstrates a Darwinian approach to system evolution.
* The original and the replacement run in parallel, not in series.
* Race conditions are an inherent issue in multi-threaded apps, and I acknowledge them.
I believe the idea makes for an interesting theoretical debate, but it is not practical, for the following reasons:
- To make sure the new version of the code works well, you need superb automatic tests, a goal that is hard to achieve and one that many companies fail to reach. You can only go about implementing such a system after such automatic tests are in place.
- The whole point of this system is performance tuning: a specific version of the code is replaced by a version that supersedes it in performance. For most applications today, performance is of minor importance. Meaning, the overall performance of most applications is adequate. Think about it: you rarely find yourself complaining that "this application is excruciatingly slow"; instead you find yourself complaining about the lack of a specific feature, stability issues, UI issues, etc. Even when you do complain about slowness, it is usually the overall slowness of the system and not of a specific application (there are exceptions, of course).
- For applications or modules where performance is a big issue, the way to improve them is to identify the bottlenecks, write a new version, and test it independently of the system first, using some kind of benchmarking. Benchmarking the new version within the entire application might be necessary, of course, but in general I think this process will take place only a small number of times (following the 20%-80% rule). Doing this process "manually" in these cases is easier and more cost-effective than the described system.
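The "benchmark the new version independently" alternative is cheap to do offline. A minimal sketch with the standard library, where `original_sum` and `candidate_sum` are made-up stand-ins for the old and new implementations of a bottleneck:

```python
import timeit

def original_sum(data):
    """Stand-in for the unoptimized bottleneck."""
    total = 0
    for x in data:
        total += x
    return total

def candidate_sum(data):
    """Stand-in for the proposed replacement."""
    return sum(data)

data = list(range(10_000))

# Correctness first: the candidate must match the original on test inputs.
assert candidate_sum(data) == original_sum(data)

# Then performance, measured offline rather than inside the live system.
t_orig = timeit.timeit(lambda: original_sum(data), number=200)
t_cand = timeit.timeit(lambda: candidate_sum(data), number=200)
print(f"original: {t_orig:.4f}s, candidate: {t_cand:.4f}s")
```

This captures the answer's point: the vetting the proposed system does at runtime can usually be done once, before deployment, at far lower cost and risk.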
- What happens when you add features, fix non-performance-related bugs, etc.? You get no benefit from the system.
- Running two versions in conjunction to compare their performance has far more problems than you might think. Not only might you have race conditions, but if the input is not an appropriate benchmark you might reach the wrong conclusion (e.g. if you happen to receive loads of small data packets while, 90% of the time, the input consists of large data packets). Furthermore, it might simply be impossible (for example, if the actual code changes the data, you can't run the two versions in conjunction).
Such an "environment" does sound useful, and would be "a must" for a "genetic" system that generates new versions of its code by itself, but that is a whole different story and not applicable here...