- An average or even less-than-average potential for bodybuilding, if achieved, is stunning to an untrained person, and respected by almost any trained individual.
- Achieving your potential for muscle and might demands extraordinary discipline and dedication. There’s no place for half measures, corner cutting, laziness or lukewarm enthusiasm. If you don’t train well, rest well, sleep well, and eat well, you’ll get nowhere or make only minimal progress.
- You have tremendous control over your physique development, if only you would start to employ it.
- Negative thoughts and negative people will harm all your endeavours. If you imagine failure, dwell on it, and prepare for it, you’ll fail.
- With a good plan, and no time for negativity, you’re set for the confidence and persistence that lead to success. But the journey there will be neither trouble free nor easy.
- Lifting weights is a solo activity over which you alone have the power of control. Once you know what to do, you need rely on no one.
- Never lock yourself into using an exercise if it doesn’t suit you. The number one priority for any exercise is that it does you no harm.
- Don’t focus on what you’ll never be able to do well. Instead, focus on what you can do better.
- Use a rep count for a given exercise that best suits you, get as strong as you can in exercises that suit you and which you can perform safely, keep your body fat levels to below 15% (or below 10% if you want an appearance that’s stunning--assuming that you have some muscle), eat healthfully, perform cardio work, stretch regularly, and then you’ll have the full bodybuilding package.
- Experience has taught me that the conservative approach isn’t just the safest way, it’s the most productive and satisfying over the long term.
- Most people train too much. Not only is this counterproductive for short-term results, it produces the overtraining that wears the body down and causes long-term structural problems.
- The biggest exercises are uncomfortable when done with effort. If they were easy to work hard on, they would do little or nothing for you. But don’t use an exercise that’s harmful for you.
- In this book, “light” weight generally means that the set’s rep target can be met easily, with little or no strain. A “heavy” weight is one that demands much effort to complete the set’s rep target regardless of how many reps that is.
- One of the biggest and most disastrous errors in the training world today is the belief that basics-first abbreviated programs are only for beginners.
- Bodybuilding and strength training are almost laughably simple; but simple doesn’t mean easy. All that really matters is focus, and progressive poundages in correct form.
- Pick a handful of the biggest and best exercises for you and then devote years to getting stronger, and then stronger still in them.
- There’s even danger in using variety because you can lose focus and get caught up in an excessive assortment of exercises.
- Focus on the big basic lifts and their variations. Do this for most of your training time. Don’t try to build yourself up using tools of detail.
- You can’t get very powerful in the key basic exercises without becoming impressive throughout your physique.
- Never train if you don’t feel systemically rested from your previous workout. While some local soreness may remain, you should be systemically rested, and mentally raring to go for every workout. If in doubt, train less often.
- Once-a-week training for the biggest exercises is a good rule of thumb. Fine-tune your training frequency according to your individual recovery ability.
- Unless you wake every morning feeling fully rested, and without having to be awoken, you’re not getting enough sleep. And even if you are making gains in the gym, more rest and sleep could substantially increase your gains.
- Add small poundage increments when you’re training full-bore. Don’t go short-circuiting a cycle by adding a minimum of 5 pounds to the bar at a shot. Get some pairs of little discs.
- Dependable training for typical people with regular lives is about doing things slowly, safely, steadily and surely. It’s not about trying to do in two months something that needs half a year.
- What matters is what works. If you can gain only from a routine that’s absurd in its brevity and simplicity by conventional standards, fine. But if you can gain well using a routine that most hardgainers wouldn’t gain an ounce on, that’s fine too.
- High reps, especially in thigh and back work, have been proven to pack on loads of muscle.
- Don’t neglect your calves, abdominals, grip, or the external rotators of your shoulders. This accessory work matters.
- Some neck work is mandatory if you’re involved in contact sport, and still a good idea if you’re not. And back extensions will help keep your lower back in good order.
- Once you have a good grasp of training, all you need is persistence and time. Then the realization of your potential for muscles and might is almost guaranteed.
- There’s not much if anything that’s really new in the training world. What’s “new” is usually just a twist on an old idea.
- With a dose of creative lingo and modern-day advertising hoopla, even something that has been around for decades can appear new.
- The dip, done in correct form, is a terrific exercise that’s much underused. It’s at least as productive as the bench press.
- Never, ever let your attention waver from progressive poundages in correct form. Never, that is, until you no longer want to build stronger and bigger muscles.
- Only you can find how far you can go by actually going as far as you can go.
- Urging realistic expectations doesn’t mean accepting mediocrity.
- Age isn’t the limiting factor untrained people usually make it out to be. The limiting factor is in the mind. Expect little from your body and that’s what it will deliver. Expect a lot from it and that’s what it will deliver.
- Never pile on bodyweight by adopting a long-term very-heavy eating program. You want a muscular physique, not a soft or flabby one.
- Once you can strictly press overhead a barbell weighing the equivalent of your bodyweight, you’ll be a better presser than nearly all weight trainees.
- What matters the most to you is your progress, and comparing yourself with yourself. Everything you study and apply related to training should be geared to this.
- You must have a great passion for what you’re doing if you’re to be successful at it. If you try to achieve at something that your heart isn’t really into, and that your body doesn’t respond to, you won’t get far.
- Your immediate short-term target should be to take the next small step towards your next set of medium-term goals.
- Never lose sight of the pivotal importance of progression. Organize your training program, nutrition, sleep, and rest habits so that you make progression a reality.
- To realize your potential, you need to become an achievement-oriented, goal-driven and success-obtaining individual.
- Achievement comes in small steps, but lots of them. Lots of little bits add up to huge achievement.
- Success is rarely an accident in any endeavour. Success in the gym is never a hit or miss activity. It’s planned.
- Deadlines are often imperative for making people take action, in all areas of life.
- Give something a deadline and urgency, and it usually gets done.
- There’s nothing like the urgency of concentrating on a specific goal by a specific deadline to focus attention, application, and resolve. Without something specific to rally attention and resources, people tend to drift along and never get even close to realizing their potential.
- Avoid getting locked into tunnel vision that keeps you looking at the same sort of targets throughout your training life.
- If you want to be successful in achieving your potential you need to program that success. Stop leaving life to chance.
- Achievement in any sphere of life depends on getting the individual moments right, at least most of the time.
- An essential part of the organization needed to get each workout day right is a training diary. At its most basic this is a written record of reps and poundage for every work set you do, and an evaluation of each workout so that you can stay alert to warning signs of overtraining.
- It’s not enough just to train hard. You need to train hard with a target to beat in every work set you do. The targets to beat in any given workout are your achievements the previous time you performed that same routine/workout.
- Unless you have accurate records of the achievements to be bettered, you can’t be sure that you really are giving your all.
- Most trainees have neither the organization needed for success, nor the will and desire to push themselves very hard when they need to.
- Never become dependent on another person to get in a good workout.
- Effective training has to be intensive, and intensive training is very difficult to deliver on a consistent basis without a demanding taskmaster to urge you to deliver.
- Treat workouts as very serious working time. Get down to business and keep your training partner or supervisor at a distance. Keep your mind focused rigidly on your training.
- Always be able to train well by yourself. Have spells where you intentionally train by yourself, to be sure you can still deliver the goods alone.
- Take as much control over your life as you can. Learn from your mistakes. Capitalize on the good things you’ve done. Do more of the positive things you’re already doing, and fewer of the negative things.
- Any equipment other than the basics nearly always serves to divert attention from where most application should be given.
- Spoil trainees for choice, and you’ll spoil their progress too.
- Supplements will never be the answer to training problems.
- Until you get the basic package of training, food, sleep, and rest in general, to deliver good steady gains in muscle and might, forget about any fine-tuning with supplements.
- Ignore anything and anyone that will hinder your progress. You are in charge of your training. Never surrender that authority to others.
- The advantages you get from a home gym are so profuse and profound that, if you’re serious about training, you should do your utmost to get one.
- Parallel-grip bent-legged deadlifts, plus chins and dips, cover most of the body’s musculature. Just those three exercises, if worked progressively and for long enough, can produce a lot of muscle.
- If you have to load or unload a heavy barbell that rests on the floor, lift the end up and slip a disc underneath the inside plate. The raised end will make plate changing much easier.
- If it’s training day today, and so long as you feel well recovered from your previous workout, train today.
- Use a belt very selectively--for low-rep squats and deadlifts, and overhead presses--or not at all. Otherwise, you’ll become dependent on it to generate the necessary intra-abdominal pressure you need to protect your spinal column during heavy lifting; and without that armor you’ll be a shadow of your usual self.
- Successful weight training in its various forms is about progressive resistance. Despite this being so obviously central to training, its implications so often go ignored or only barely noticed. Successful training is about making lots of small bits of progress, with all the bits adding up to huge improvement. To achieve this you must have persistence galore, patience in abundance, and revel in knocking off each little bit of accumulation. Concentrate on knocking off one little bit at a time, and long-term success almost takes care of itself.
- As always, your barometer of progress is poundage progression. If the poundage gains aren’t coming, cut back your training volume by reducing total sets and/or exercises, and perhaps by training less frequently. Less work but harder work, and less total demand upon your recuperative abilities, will usually get the poundage progression back on track.
- If you’re gaining in your core exercises for a given cycle, you’ll be gaining in size and strength generally. The core movements are what you need to focus on as the cycle gets ever heavier and more demanding, and closes in on its end.
- The secondary exercises should not restrict progress in the core movements when you’re focusing on building mass. [...] In practice, to maximize gains on your core exercises, you may need to phase out some of the secondary movements as you approach the end of a cycle, or reduce their training frequency.
- Continue with the slow poundage increments, and progress will continue to feel smooth, good, and strong.
- While it’s essential to add poundage to the bar as often as possible, it’s imperative not to be enslaved by it.
- The greatest number of consecutive full-bore workouts is no good if you’re not recovering fully from each of them.
- Don’t be foolhardy, or else you may regret it later. Back off and come back next week for a hard workout when you’re ready. This aspect of conservatism especially applies once you’re in your thirties, and older.
- Remember, safety first, at all times. Patience, conservatism, and training longevity will serve you best over the long term. Haste and shortcuts invariably backfire. Haste makes waste.
- When adding poundage to the bar, use smaller rather than larger increments.
- Don’t ruin the potential magic of abbreviated routines by adding poundage too quickly, in too large jumps, or by training too frequently.
- Find the time to develop a flexible body and then maintain it.
- Eventually you will reach the point where no progress is being made, or is forthcoming. This represents the end of the cycle for that exercise.
- Conservatism, with few exceptions, is the way to go for most people who lift weights.
- Make haste slowly.
- The harder you train, the less training (volume and frequency) you need to stimulate gains in strength and muscle.
- As you gain experience of training hard, you’ll learn to tolerate more discomfort.
- A steady diet of extremely heavy weights imposes enormous stress on the body. To pre-exhaust will reduce the size of the poundage needed in the compound movement to deliver a good training effect.
- Training intensity is a means to an end, not the end in itself.
- Training intensity is a fundamental and irreplaceable component of making muscle growth and progressive poundages a reality, but that’s all.
- Whatever you try, never persist with something that doesn’t help to keep your training poundages moving up, no matter how much it may be promoted by others.
- Don’t assume that anyone who claims to be a qualified personal trainer really knows what he’s doing. Strings of letters that indicate certifications of various organizations, or degrees obtained, don’t necessarily signify competence as a coach.
- Potentially, the squat may be the most productive single exercise you can do provided you can perform it safely and progressively. The more efficiently you squat, the greater the potential benefits you can extract from it.
- Mastering the squat, and then intensively and progressively squatting on a consistent basis is a linchpin of successful bodybuilding and strength training. Don’t miss out on your chance to exploit this wonderful exercise.
- If you can’t squat safely and productively using a barbell, you should try both the hip-belt squat, and the parallel-grip deadlift.
- The parallel-grip deadlift isn’t just an alternative to the barbell squat. It’s an excellent exercise in its own right.
- Some form of deadlifting should be part of every program.
- If you don’t barbell squat you must find an alternative that at least approaches the quality of the squat: the parallel-grip deadlift, the hip-belt squat, or the leg press, adding some form of deadlifting if you use either of the latter two.
- If you don’t find a good alternative to the barbell squat you’ll greatly reduce the potential value of your training for building muscle, if not almost extinguish it.
- Shrugs done face-down on a bench set at about 45-degrees work the musculature of the upper back differently to the regular standing shrug. In the incline shrug the whole upper back is involved, especially the lower and middle areas of the traps, and the muscles around and between the shoulder blades.
- The incline shrug in particular will help improve posture for people who have rounded shoulders.
- The parallel bar dip works more muscle than does the bench press.
- The developmental effects of the bench press in your particular case should also be a consideration in exercise selection. If you find that you get overly heavy lower pecs from the bench press (or dip), the incline press should be a preferred choice.
- There are seven small areas that shouldn’t be neglected during the focus on the core movements. This supporting seven can have a big impact on keeping you free of injuries. It’s made up of specific work for your calves, grip, shoulders’ external rotators, neck, midsection, lower back (work from back extensions additional to that from deadlift variations), and finger extensors (to balance the strength of opposing muscles in your forearms). The leg curl should be included too, to provide hamstring work additional to that given by deadlift variations. And the lateral raise is a valuable isolation exercise, to help produce healthy shoulders.
- Deadlifting using a thick barbell provides tremendous grip work. Add the thick-bar deadlift to your deadlift day’s work, either on top of your regular work if you recuperate well, or instead of a little of your regular work, in order to keep the total volume of work constant.
- Most bodybuilders and strength trainers perform their reps too quickly.
- Perform each rep as an individual unit that ends with a brief pause prior to performing the next rep. Take the time you need to set yourself to perform the next rep perfectly.
- Abbreviated training is the most productive type of training for typical drug-free trainees. The potential effectiveness of abbreviated training rests on the brevity of its routines and its infrequent workout frequency relative to conventional training methods.
- Particularly in the pre-steroids era, working out three times a week with a full-body program was the standard.
- The incline shrug is the best all-purpose upper-back shrug.
- If your training poundages are moving up steadily, and your form is consistently good, you’re bang on course. To be able to gain like this you may not need to train any exercise more often than once every five to seven days. And there are some people who may be better off training some of their exercises and body parts less often than once a week.
- Most people weight train too frequently, and don’t provide enough time for their bodies to grow stronger and bigger. Thus they fail to progress; and in addition they accumulate wear-and-tear injuries because their bodies are being worn down.
- The more progressive workouts you put in, the faster your overall progress will be. But if you train too frequently you’ll not be able to produce many if any progressive workouts.
- It’s a dedication to results that counts most, not a dedication to mere gym attendance.
- Don’t weight train if you’re still dragging your feet from the previous workout.
- Results from your weight training come from using incrementally ever-greater poundages. To achieve progressive training poundages you must rest between workouts long enough to recover from the impact of each workout, and then rest a bit longer so that your body can build a smidgen of extra strength and muscle.
- Successful weight training is about stringing together as many progressive workouts as possible. Twenty progressive workouts over ten weeks will produce far better results than thirty workouts over the same period but with only a handful of them being progressive.
- Weight-training success is about results, not just about how many hours you clock up at a gym.
- Experiment with training frequency to see what delivers steady exercise poundage gains for you. This will be relative to your individual recovery ability and the type of training program you use.
- A good training program that doesn’t yield good results can often be made productive by supplying more rest, sleep, and nutrition.
- “Best” is defined as what consistently produces poundage gain on all your exercises.
- Twenty-rep rest-pause squatting and deadlifting are extremely demanding, which is why they can be extremely productive, but only when combined with a very abbreviated training program, lots of recovery time, and plenty of quality nourishment.
- Single-rep work is exaggerated rest-pause training where the pause between reps is extended to several minutes, making the individual reps into sets of one rep each.
- Remember, many if not most people don’t have the robustness of joints and connective tissue needed to prosper on singles.
- Choose exercises you can safely perform over the long term.
- Once you’ve found your preferred major core exercises, stick with them over the long term. It’s a fallacy that you must regularly change your exercises in order to keep progressing. Changing your exercises around excessively is even counterproductive because it stops you from applying yourself to a given group of exercises for long enough to really milk them dry and make sufficient progress in strength to yield a difference.
- Rather than look for a better way to train, look for ways to recover better between workouts, and to focus better during your workouts so that you can train harder and with better form.
- Training each exercise just once a week doesn't necessarily mean training each body part only once a week.
- Sleeping well on a regular basis is of critical importance. You need to take action to correct any sleeping inadequacies you may have. Otherwise your inability to sleep adequately will continue, your recovery will be compromised, and your rate of gain in muscle and might impaired.
- Unless you wake every morning feeling fully rested, and without having to be awoken, you’re not getting enough sleep. And even if you’re making gains in the gym, more sleep and rest in general could substantially increase your gains.
- Getting adequate sleep is pivotal for enabling your body to recover optimally from training. Most trainees shortchange themselves of sleep, and as a result restrict their rate of progress in the gym.
- Use chalk everywhere you need the help, especially in back exercises and upper-body pressing movements.
- A flexible body helps protect you from injury so long as you don’t perform your stretches in a way that exposes you to injury in the first place.
- Have the courage to swim against the training tide.
- Life is too short to waste any of it on useless training methods.
- The first requirements for realizing a very demanding goal are lots of resolve, heaps of persistence, and tons of effort. [...] There’s no easy way to reach a demanding goal.
- Gravity in general, and the stress of heavy exercise in particular, compress the joints of the spine, and the muscles and soft tissue of the back as a whole, but especially of the lower back. This is at the root of many back problems. But with the appropriate therapy this compression can be relieved, leading to a healthier and more injury-resistant lower back.
- One of the simplest measures for taking care of the lower back is to take pressure off the lumbar spine. Not only is this inversion therapy a preventative measure, it can help during rehabilitation following injury.
- More isn’t better with inversion therapy (as with much of physical training and therapy). Just one minute or so is all you need to achieve maximum decompression of your spine. A longer duration isn’t necessary, and a shorter time may work very well for you.
- Better to have several very short inversions per day rather than one long one. Invert for up to 60 seconds one or more times per day, and you may experience immediate benefits.
- Even if your body can tolerate singles and very low-rep work, avoid using such high-force training for long periods.
- If your body isn’t suited to singles and very low reps, stick to medium and high reps. Use a rep count for a given exercise that suits your body.
- Avoid medium- and high-impact aerobic and cardio work.
- The harder and more seriously you train, the greater the need to satisfy nutritional requirements. Better to oversupply on the nutritional front than undersupply. Don’t give your all in the gym and then sabotage your progress by cutting corners with your diet!
20181218
Beyond Brawn by Stuart McRobert
20181216
Make: FPGAs by David Romano
- For FPGA hobbyists and DIYers, Papilio by Gadget Factory is second to none.
- The Papilio series of FPGA development boards takes add-on hardware application modules called “Wings” that plug into the main board--sort of like Arduino shields.
- Logic cell ratings are intended to show the logic density of one Xilinx device as compared to another device.
- Like a logic block, the typical cell includes a couple of flip-flops, multiplier, logic gates, and a small amount of RAM for a configurable lookup table (LUT).
- A block RAM (BRAM) is a dedicated two-port memory block containing several kilobits of RAM.
- A DOA (dead on arrival) test is a very popular test for SoC validation engineers to write (typically it is the first test they write for a new chip or design block). This test usually checks to see if the chip or design has any life at all after power-up.
- Logic synthesis is the process by which the register-transfer level (RTL) of the SoC is turned into a design implementation in terms of logic gates, typically by a CAD tool called a synthesis tool.
- We can use a simple binary counter as a clock frequency divider circuit.
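A quick C sketch (my own behavioral model, not HDL from the book) of why a counter divides a clock: bit n of a free-running counter toggles every 2^n input ticks, so tapping bit n yields an output at f_clk / 2^(n+1).

```c
#include <stdio.h>
#include <stdint.h>

/* Behavioral model of a clock divider: increment a counter on every
 * "clock tick" and watch one of its bits.  Bit 3 toggles every 8 ticks,
 * i.e. the tapped output runs at 1/16th of the input clock rate. */
int main(void) {
    uint32_t counter = 0;
    const int tap = 3;
    int prev = 0;

    for (int tick = 1; tick <= 64; tick++) {
        counter++;                        /* rising clock edge */
        int out = (counter >> tap) & 1;   /* divided "clock" output */
        if (out != prev)
            printf("tick %2d: divided output -> %d\n", tick, out);
        prev = out;
    }
    return 0;
}
```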
- It’s very important to use a naming convention for labeling--this is a good design habit to get yourself into. For small designs, it is not critical, but in larger designs or joint designs it becomes essential.
- Verilog modules are like functions in other programming languages. They are pieces of code that can be used and reused within a single program.
- Modules can contain both structural and behavioral statements. Structural statements represent circuit components like logic gates, counters, and microprocessors. Behavioral-level statements are programming statements that have no direct mapping to circuit components like loops, if-then statements, and stimulus vectors that are used to exercise a circuit.
- We now can see that a synchronous circuit consists of two kinds of elements: registers and combinational logic. Registers (usually implemented as D flip-flops) synchronize the circuit’s operation to the edges of the clock signal, and are the only elements in the circuit that have memory properties. Combinational logic performs all the logical functions in the circuit, and it typically consists of logic gates.
- Binary counters make good frequency dividers.
- Using a simple LED blink test and an FPGA frequency divider is a good vehicle to test the functionality of a new FPGA platform.
- The RTL consists of two kinds of elements: registers and combinational logic. Registers (usually implemented as D flip-flops) synchronize the circuit’s operation to the edges of the clock signal, and are the only elements in the circuit that have memory properties. Combinational logic performs all the logical functions in the circuit, and it typically consists of logic gates.
- Timing issues are the most challenging issues to debug in an FPGA design. For us, it’s good to know that we will not be working with clock speeds that push the envelope of our FPGA technology.
- Doing a simple DOA simulation first, even for small designs, provides a good sanity check to see if the design works functionally at some level.
- Assigning physical I/O connections can be the most difficult part of the whole design process, because this is where you link the virtual world of your ISE design to the real world of the actual FPGA chip and your particular FPGA module circuit board layout.
- The first thing you need to wrap your head around is that when you are coding in HDL, you are not writing a software program; rather, you are describing digital hardware logic functionality.
- With concurrency, there are no sequential steps of code execution like: “first do this, then do this, then do that.” There really is only one instant in time, and that is the clock tick.
- Think of the HDL you are using as more like describing a block diagram than a flow chart.
- Real hardware operates concurrently.
- Concurrency is what differentiates HDL from other programming languages, which are sequential. Concurrency is not explicit in programming languages like C.
- In the simplest sense, a test bench is a virtual testing environment used to verify that a design does everything it’s supposed to do and doesn’t do anything it’s not supposed to do.
- A test bench applies stimuli (inputs) to the unit under test (UUT), also referred to as the device under test (DUT). It monitors the outputs, providing status and error reporting in a readable and user-friendly format.
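A rough software analogue of that idea (the names dut_add and check are invented for illustration; a real test bench would drive a simulated circuit rather than a C function):

```c
#include <stdio.h>

/* Stand-in for the design under test. */
static int dut_add(int a, int b) { return a + b; }

/* The "test bench": apply stimuli, monitor outputs, report results. */
static int check(const char *name, int got, int expected) {
    if (got != expected) {
        printf("FAIL %-12s got %d, expected %d\n", name, got, expected);
        return 1;
    }
    printf("PASS %s\n", name);
    return 0;
}

int main(void) {
    int errors = 0;
    errors += check("small",    dut_add(2, 3),  5);
    errors += check("negative", dut_add(-4, 1), -3);
    errors += check("zero",     dut_add(0, 0),  0);
    printf("%d error(s)\n", errors);
    return errors ? 1 : 0;
}
```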
- The I2C serial bus is a multimaster, multipoint protocol, unlike the very popular UART serial bus protocol, which is point-to-point only.
- Wishbone is an open source hardware computer bus that uses a single simple, high-speed synchronous specification to connect components together in an SoC.
- Waveforms are very useful for viewing large amounts of data quickly and efficiently.
- Graphical analysis is an easy way to see if there is a difference in test results.
- The I2C bus is a low-speed, low-power, two-wire serial bus/protocol that is an industry standard for many peripheral devices used in the electronics and computer industry.
- Brian Stuart of Drexel University has written a great piece on CARDIAC, where he explains how the simple instruction set makes for very easy understanding of how complex programs can be built out of simpler sets of operations and data.
- VTACH is an OpenCores FPGA project that is actually a Verilog implementation of the original CARDIAC teaching computer from Bell Labs.
- Simply put, an SoC is a semiconductor microchip that contains multiple electronic components integrated together on a single silicon die. This single chip may contain digital, analog, mixed-signal, and even RF (radio frequency) functions that collectively comprise a complete system.
- Gadget Factory’s DesignLab is a great frontend tool for Xilinx ISE schematic entry.
- The ZPUino soft processor core is a 32-bit processor that is easily programmed like the Arduino and is a great building block for FPGA SoCs.
- SDR is a radio communication system where components that have typically been implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc) are implemented through software, typically in an embedded system or PC.
20181206
Compiler Construction by Niklaus Wirth
- Compilers convert program texts into internal code. Hence they constitute the bridge between software and hardware.
- Genuine understanding of a subject is best acquired from an in-depth involvement with both concepts and details.
- We consider syntax analysis as a means to an end, but not as the ultimate goal.
- Computer programs are formulated in a programming language and specify classes of computing processes.
- The construction of the first compiler for the language Fortran around 1956 was a daring enterprise, whose success was not at all assured.
- The translation process essentially consists of the following parts:
- The sequence of characters of a source text is translated into a corresponding sequence of symbols of the vocabulary of the language.
- The sequence of symbols is transformed into a representation that directly mirrors the syntactic structure of the source text and lets this structure easily be recognized.
- High-level languages are characterized by the fact that objects of programs are classified according to their type. Therefore, in addition to syntactic rules, compatibility rules among types of operators and operands define the language. Hence, verification of whether these compatibility rules are observed by a program is an additional duty of a compiler. This verification is called type checking.
- On the basis of the representation resulting from step 2, a sequence of instructions taken from the instruction set of the target computer is generated. This phase is called code generation.
- A partitioning of the compilation process into as many parts as possible was the predominant technique until about 1980, because until then the available [memory] store was too small to accommodate the entire compiler.
- Modern computers with their apparently unlimited stores make it feasible to avoid intermediate storage on disk. And with it, the complicated process of serializing a data structure for output, and its reconstruction on input can be discarded as well.
- A wise compromise exists in the form of a compiler with two parts, namely a front-end and a back-end. The first part comprises lexical and syntax analyses and type checking, and it generates a tree representing the syntactic structure of the source text. This tree is held in main store and constitutes the interface to the second part which handles code generation.
- The idea of decoupling source language and target architecture has also led to projects creating several front ends for different languages generating trees for a single back-end.
- A compiler which generates code for a computer different from the one executing the compiler is called a cross compiler.
- Every language displays a structure called its grammar or syntax.
- A language is defined by the following:
- The set of terminal symbols. These are the symbols that occur in its sentences.
- The set of non-terminal symbols. They denote classes and can be substituted.
- The set of syntactic equations. These define the possible substitutions of non-terminal symbols.
- The start symbol.
- A language is the set of sequences of terminal symbols which, starting with the start symbol, can be generated by repeated application of syntactic equations, that is, substitutions.
- The idea of defining languages and their grammar with mathematical precision goes back to Noam Chomsky.
- The use of the Chomsky formalism is also responsible for the term programming language, because programming languages seemed to exhibit a structure similar to spoken languages.
- A language is "regular" if its syntax can be expressed by a single EBNF expression.
- The method of recursive descent is only one of several techniques to realize the top-down parsing principle.
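A minimal recursive-descent sketch in C (a toy of my own, not from the book): one parsing procedure per non-terminal of a tiny expression grammar, with the current input character acting as the lookahead symbol. The parser doubles as an evaluator so its output can be checked.

```c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

/* Grammar (EBNF):
 *   expr   = term   { ("+" | "-") term }.
 *   term   = factor { ("*" | "/") factor }.
 *   factor = digit | "(" expr ")".                                   */
static const char *src;

static int expr(void);                       /* forward declaration */

static void fail(const char *msg) { fprintf(stderr, "error: %s\n", msg); exit(1); }

static int factor(void) {
    if (isdigit((unsigned char)*src))
        return *src++ - '0';
    if (*src == '(') {
        src++;
        int v = expr();
        if (*src++ != ')') fail("expected ')'");
        return v;
    }
    fail("expected digit or '('");
    return 0;
}

static int term(void) {
    int v = factor();
    while (*src == '*' || *src == '/') {
        char op = *src++;
        int r = factor();
        v = (op == '*') ? v * r : v / r;
    }
    return v;
}

static int expr(void) {
    int v = term();
    while (*src == '+' || *src == '-') {
        char op = *src++;
        int r = term();
        v = (op == '+') ? v + r : v - r;
    }
    return v;
}

int main(void) {
    src = "(1+2)*3-4";
    printf("(1+2)*3-4 = %d\n", expr());      /* prints 5 */
    return 0;
}
```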
- Parsing in the bottom-up direction is also called shift-reduce parsing.
- Bottom-up parsing is in general more intricate and complex than top-down parsing.
- Ultimately, the basic idea behind every language is that it should serve as a means for communication. This means that partners must use and understand the same language.
- The scanner has to recognize terminal symbols in the source text.
- As soon as an unacceptable symbol turns up, the task of the parser is completed, and the process of syntax analysis is terminated.
- We postulate the following quality criteria for error handling:
- As many errors as possible must be detected in a single scan through the text.
- As few additional assumptions as possible about the language are to be made.
- Error handling features should not slow down the parser appreciably.
- The parser program should not grow in size significantly.
- Without doubt, a simple language structure significantly simplifies error diagnostics, or, in other words, a complicated syntax complicates error handling unnecessarily.
- The essential characteristics of a good compiler, regardless of details, are that (1) no sequence of symbols leads to its crash, and (2) frequently encountered errors are correctly diagnosed and subsequently generate no, or few additional, spurious error messages.
- Context is represented by a data structure which contains an entry for every declared identifier. This entry associates the identifier with the denoted object and its properties. The data structure is known by the name symbol table.
- Every declaration results in a new symbol table entry.
- Every occurrence of an identifier in a statement requires a search of the symbol table in order to determine the attributes (properties) of the object denoted by the identifier.
- The simplest form of data structure for representing a set of items is the list. Its major disadvantage is a relatively slow search process, because it has to be traversed from its root to the desired element.
- In order to speed up the search process, the list is often replaced by a tree structure. Its advantage becomes noticeable only with a fairly large number of entries.
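A small C sketch of the list-based variant (illustration only; the struct and field names are invented): every declaration prepends an entry, and every use of an identifier does a linear search.

```c
#include <stdlib.h>
#include <string.h>

typedef struct Object {
    char name[32];
    int type;                      /* placeholder for the type attribute */
    struct Object *next;
} Object;

static Object *symtab = NULL;      /* head of the list */

static Object *declare(const char *name, int type) {
    Object *obj = calloc(1, sizeof(Object));
    strncpy(obj->name, name, sizeof obj->name - 1);
    obj->type = type;
    obj->next = symtab;            /* prepend: newest declaration first */
    symtab = obj;
    return obj;
}

static Object *lookup(const char *name) {
    for (Object *o = symtab; o != NULL; o = o->next)
        if (strcmp(o->name, name) == 0)
            return o;
    return NULL;                   /* undeclared identifier */
}

int main(void) {
    declare("x", 1);
    declare("count", 2);
    return (lookup("count") != NULL && lookup("y") == NULL) ? 0 : 1;
}
```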
- In languages featuring data types, their consistency checking is one of the most important tasks of a compiler. The checks are based on the type attribute recorded in every symbol table entry.
- We must determine the format in which data are to be represented at run-time in the [memory] store. The choice inherently depends on the target architecture, although this fact is less apparent because of the similarity of virtually all computers in this respect.
- Every computer features certain elementary data types together with corresponding instructions, such as integer addition and floating-point addition. These types are invariably scalar types, and they occupy a small number of consecutive memory locations (bytes).
- The size of an array is its element size multiplied by the number of its elements. The address of an element is the sum of the array's address and the element's index multiplied by the element size.
- Absolute addresses of variables are usually unknown at the time of compilation. All generated addresses must be considered as relative to a common base address which is given at run-time. The effective address is then the sum of this base address and the address determined by the compiler.
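A tiny C check of that arithmetic (illustrative only): the address of a[i] is the array's base address plus i times the element size.

```c
#include <stdint.h>
#include <assert.h>

int main(void) {
    int a[10];
    uintptr_t base = (uintptr_t)&a[0];
    uintptr_t computed = base + 7 * sizeof(int);   /* base + index * element size */
    assert(computed == (uintptr_t)&a[7]);
    return 0;
}
```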
- Although bytes can be accessed individually, typically a small number of bytes are transferred from or to memory as a packet, a so-called word.
- If allocation occurs strictly in sequential order it is possible that a variable may occupy (parts of) several words. But this should be avoided, because otherwise a variable access would involve several memory accesses, resulting in an appreciable slowdown.
- A simple method of overcoming this problem is to round up (or down) each variable's address to the next multiple of its size. This process is called alignment.
- The price of alignment is the loss of some bytes in memory, which is quite negligible.
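The usual rounding trick, sketched in C (my own illustration; assumes power-of-two alignments):

```c
#include <stdio.h>

/* Round an offset up to the next multiple of a power-of-two alignment. */
#define ALIGN_UP(offset, align) (((offset) + (align) - 1) & ~((align) - 1u))

int main(void) {
    /* Allocate a 1-byte char, then a 4-byte int: the int's offset is
     * rounded up from 1 to 4, "losing" three bytes to padding. */
    unsigned offset = 0;
    unsigned charOff = offset;               offset += 1;
    unsigned intOff  = ALIGN_UP(offset, 4);  offset  = intOff + 4;
    printf("char at %u, int at %u, %u bytes used\n", charOff, intOff, offset);
    return 0;
}
```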
- Following a longstanding tradition, addresses of variables are assigned negative values, that is, negative offsets to the common base address determined during program execution.
- The acronym RISC stands for reduced instruction set computer, where "reduced" is to be understood as relative to architectures with large sets of complex instructions.
- From the viewpoints of the programmer and the compiler designer the computer consists of an arithmetic unit, a control unit and a store.
- Register instructions operate on registers only and feed data through a shifter and the arithmetic logic unit ALU.
- Memory instructions fetch and store data in memory.
- Branch instructions affect the program counter.
- Our ideal computer would be capable of directly interpreting post-fix notation. Such an ideal computer requires a stack for holding intermediate results. Such a computer architecture is called a stack architecture.
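A small C sketch of how such a machine evaluates post-fix code with an explicit stack (toy example, single-digit operands only): "3 4 + 2 *" is the post-fix form of (3 + 4) * 2.

```c
#include <stdio.h>
#include <ctype.h>

int main(void) {
    const char *code = "34+2*";              /* post-fix "program"      */
    int stack[32], sp = 0;

    for (const char *p = code; *p; p++) {
        if (isdigit((unsigned char)*p)) {
            stack[sp++] = *p - '0';          /* push operand            */
        } else {
            int b = stack[--sp];             /* pop two operands,       */
            int a = stack[--sp];             /* apply the operator,     */
            stack[sp++] = (*p == '+') ? a + b : a * b;   /* push result */
        }
    }
    printf("result = %d\n", stack[0]);       /* prints 14               */
    return 0;
}
```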
- Whenever arithmetic expressions are evaluated, the inherent danger of overflow exists. The evaluating systems should therefore be suitably guarded.
- The essence of delayed code generation is that code is not emitted before it is clear that no better solution exists.
- Conditional and repeated statements are implemented with the aid of jump instructions, also called branch instructions.
- Procedures, which are also known as subroutines, are perhaps the most important tool for structuring programs. Because of their frequency of occurrence, it is mandatory that their implementation is efficient. Implementation is based on the branch instruction which saves the current PC value, and thereby the point of return; after termination of the procedure, this value is reloaded into the PC register.
- Algol-60 introduced the very fundamental concept of local variables. It implied that every identifier declared had a limited range of visibility and validity.
- In concrete terms, variables may be declared local to a procedure such that they are visible and valid within this procedure only.
- Addresses of local variables generated by the compiler are always relative to the base address of the respective activation frame. Since in programs most variables are local, their addressing also must be highly efficient. This is achieved by reserving a register to hold the base address, and to make use of the fact that the effective address is the sum of a register value and the instruction's address field.
- Parameters constitute the interface between the calling and the called procedures. Parameters on the calling side are said to be actual parameters, and those on the called side formal parameters. The latter are in fact only place holders for which the actual parameters are substituted.
- Most programming languages distinguish between at least two kinds of parameters. The first is the value parameter where, as its name suggests, the value of the actual parameter is assigned to the formal variable. The second kind of parameter is the reference parameter, where, also as suggested by its name, a reference to the actual parameter is assigned to the formal variable.
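C models the value parameter directly and the reference parameter with a pointer; a minimal sketch (my own illustration):

```c
#include <stdio.h>

static void byValue(int x)      { x = 99; }   /* works on a copy               */
static void byReference(int *x) { *x = 99; }  /* works on the caller's variable */

int main(void) {
    int a = 1, b = 1;
    byValue(a);                               /* a is still 1 */
    byReference(&b);                          /* b is now 99  */
    printf("a = %d, b = %d\n", a, b);
    return 0;
}
```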
- The fact that with real numbers only approximate results can be obtained, may be understood by considering that real numbers are represented by scaled integers with a fixed, finite number of digits.
- In many computers, instructions for floating-point operands use a special set of registers. The reason behind this is that often separate co-processors, so-called floating-point units (FPUs) are used which implement all floating-point instructions and contain this set of floating-point registers.
- In order to be complete, a computer's set of instructions must also contain conversion instructions which convert integers into floating-point numbers and vice-versa. The same holds at the level of the programming language.
- Conversions between integer types are simple and efficient, because they consist of a sign extension only.
- An open array is an array parameter whose length is unknown (open) at the time of compilation.
- The two forms of data structures provided in Oberon are the array (all elements of the same type, homogeneous structure) and the record (heterogeneous structure).
- If every pointer variable is initialized to NIL, it suffices to precede every access via a pointer with a test for the pointer value NIL.
- A variable is no longer relevant when there are no references to it.
- The principal advantage of separate compilation is that changes in a module M do not invalidate clients of M, if the interface of M remains unaffected.
- A primary goal of good code optimization is the most effective use of registers in order to reduce the number of accesses to the relatively slow main memory. A good strategy of register usage yields more advantages than any other branch of optimization.
20181205
xv6: a simple, Unix-like teaching operating system
- The job of an operating system is to share a computer among multiple programs and to provide a more useful set of services than the hardware alone supports.
- An operating system provides services to user programs through an interface.
- Each running program, called a process, has memory containing instructions, data, and a stack. The instructions implement the program's computation. The data are the variables on which the computation acts. The stack organizes the program's procedure calls.
- When a process needs to invoke a kernel service, it invokes a procedure call in the operating system interface. Such a procedure call is called a system call. The system call enters the kernel; the kernel performs the service and returns.
- The kernel uses the CPU's hardware protection mechanisms to ensure that each process executing in user space can access only its own memory.
- The shell is an ordinary program that reads commands from the user and executes them, and is the primary user interface to traditional Unix-like systems.
- An xv6 process consists of user-space memory (instructions, data, and stack) and per-process state private to the kernel.
- The exec system call replaces the calling process's memory with a new memory image loaded from a file stored in the file system.
- A process that needs more memory at run-time (perhaps for malloc) can call sbrk(n) to grow its data memory by n bytes; sbrk returns the location of the new memory.
- A file descriptor is a small integer representing a kernel-managed object that a process may read from or write to.
- Internally, the xv6 kernel uses the file descriptor as an index into a per-process table, so that every process has a private space of file descriptors starting at zero.
- By convention, a process reads from file descriptor 0 (standard input), writes to file descriptor 1 (standard output), and writes error messages to file descriptor 2 (standard error).
- The read and write system calls read bytes from and write bytes to open files named by file descriptors.
- Each file descriptor that refers to a file has an offset associated with it. Read reads data from the current file offset and then advances that offset by the number of bytes read: a subsequent read will return the bytes following the ones returned by the first read.
- Like read, write writes data at the current file offset and then advances that offset by the number of bytes written: each write picks up where the previous one left off.
- A newly allocated file descriptor is always the lowest-numbered unused descriptor of the current process.
- Fork copies the parent's file descriptor table along with its memory, so that the child starts with exactly the same open files as the parent.
- The dup system call duplicates an existing file descriptor, returning a new one that refers to the same underlying I/O object.
- File descriptors are a powerful abstraction, because they hide the details of what they are connected to: a process writing to file descriptor 1 may be writing to a file, to a device like the console, or to a pipe.
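A POSIX sketch of those conventions (not xv6 source, though xv6 offers the same calls): redirect standard output into a file by exploiting the lowest-unused-descriptor rule.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    close(1);        /* free descriptor 1 (standard output)          */
    dup(fd);         /* dup returns the lowest unused fd: exactly 1  */
    close(fd);       /* the file is now reachable only through fd 1  */

    const char *msg = "hello via redirected stdout\n";
    write(1, msg, strlen(msg));   /* lands in out.txt, not the terminal */
    return 0;
}
```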
- A pipe is a small kernel buffer exposed to processes as a pair of file descriptors, one for reading and one for writing. Writing data to one end of the pipe makes that data available for reading from the other end of the pipe.
- Pipes provide a way for processes to communicate.
- Pipes allow for synchronization: two processes can use a pair of pipes to send messages back and forth to each other, with each read blocking its calling process until the other process has sent data with write.
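A small POSIX sketch of that pattern (not taken from the xv6 text): the parent writes into one end, and the child's read blocks until the data arrives.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int p[2];
    char buf[64];

    if (pipe(p) < 0)
        return 1;

    if (fork() == 0) {                       /* child: the reader      */
        close(p[1]);                         /* close unused write end */
        ssize_t n = read(p[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(p[0]);
        return 0;
    }

    close(p[0]);                             /* parent: the writer         */
    write(p[1], "ping", 4);
    close(p[1]);                             /* end-of-data for the reader */
    wait(NULL);
    return 0;
}
```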
- A file's name is distinct from the file itself; the same underlying file, called an inode, can have multiple names, called links.
- The link system call creates another file system name referring to the same inode as an existing file.
- Each inode is identified by a unique inode number.
- Unix's combination of the "standard" file descriptors, pipes, and convenient shell syntax for operations on them was a major advance in writing general-purpose reusable programs.
- The authors of Unix went on to build Plan 9, which applied the "resources are files" concept to modern facilities, representing networks, graphics, and other resources as files or file trees.
- Any operating system must multiplex processes onto the underlying hardware, isolate processes from each other, and provide mechanisms for controlled inter-process communication.
- The implementation of an operating system must achieve three requirements: multiplexing, isolation, and interaction.
- To achieve strong isolation, a helpful approach is to deny applications direct access to hardware resources, and instead abstract the resources into services.
- In kernel mode, the processor is allowed to execute privileged instructions.
- The software running in kernel space (or in kernel mode) is called the kernel.
- A key design question for an operating system is what part of the operating system should run in kernel mode.
- The unit of isolation in xv6 is a process.
- A process is an abstraction that provides the illusion to a program that it has its own abstract machine.
- The x86 page table translates (or "maps") a virtual address (the address that an x86 instruction manipulates) to a physical address (an address that the processor chip sends to main memory).
- The xv6 kernel maintains many pieces of state for each process, which it gathers into a struct proc. A process's most important pieces of kernel state are its page table, its kernel stack, and its run state.
- Each process has two stacks: a user stack and a kernel stack.
- When a process makes a system call, the processor switches to the kernel stack, raises the hardware privilege level, and starts executing the kernel instructions that implement the system call. When the system call completes, the kernel returns to user space: the hardware lowers its privilege level, switches back to the user stack, and resumes executing user instructions just after the system call instruction.
- Page tables are the mechanism through which the operating system controls what memory addresses mean.
- An x86 page table is logically an array of 2^20 page table entries (PTEs). Each PTE contains a 20-bit physical page number (PPN) and some flags. The paging hardware translates a virtual address by using its top 20 bits to index the page table, and replacing those bits with the PPN from the PTE. The paging hardware copies the low 12 bits unchanged from the virtual to the translated physical address.
- A page table is stored in physical memory as a two-level tree. [...] This two-level structure allows a page table to omit entire page table pages in the common case in which large ranges of virtual addresses have no mappings.
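A software model of that two-level walk (my own sketch, not xv6 code or real paging hardware; the directory holds C pointers instead of physical addresses so the model stays runnable):

```c
#include <stdint.h>
#include <stdio.h>

#define PTE_P        0x001u                   /* "present" flag         */
#define PDX(va)      (((va) >> 22) & 0x3FF)   /* page-directory index   */
#define PTX(va)      (((va) >> 12) & 0x3FF)   /* page-table index       */
#define OFF(va)      ((va) & 0xFFF)           /* offset within the page */
#define PTE_ADDR(e)  ((e) & ~0xFFFu)          /* the PTE's 20-bit PPN   */

/* Translate va, or return 0 if the address is unmapped. */
static int translate(uint32_t **pgdir, uint32_t va, uint32_t *pa) {
    uint32_t *pgtab = pgdir[PDX(va)];
    if (pgtab == NULL || !(pgtab[PTX(va)] & PTE_P))
        return 0;                             /* would fault in hardware */
    *pa = PTE_ADDR(pgtab[PTX(va)]) | OFF(va);
    return 1;
}

int main(void) {
    static uint32_t pgtab[1024];
    static uint32_t *pgdir[1024];             /* unused slots stay NULL    */
    uint32_t va = 0x00001234, pa = 0;

    pgdir[PDX(va)] = pgtab;
    pgtab[PTX(va)] = 0x00ABC000 | PTE_P;      /* map the page at PPN 0xABC */

    if (translate(pgdir, va, &pa))
        printf("va 0x%08x -> pa 0x%08x\n", (unsigned)va, (unsigned)pa);
    return 0;                                 /* prints pa 0x00abc234      */
}
```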
- Physical memory refers to storage cells in DRAM. A byte of physical memory has an address, called a physical address. Instructions use only virtual addresses, which the paging hardware translates to physical addresses, and then sends to the DRAM hardware to read or write storage.
- Each process has a separate page table, and xv6 tells the page table hardware to switch page tables when xv6 switches between processes.
- Different processes' page tables translate user addresses to different pages of physical memory, so that each process has private user memory.
- To guard a stack growing off the stack page, xv6 places a guard page right below the stack. The guard page is not mapped and so if the stack runs off the stack page, the hardware will generate an exception because it cannot translate the faulting address.
- Exec is the system call that creates the user part of an address space. It initializes the user part of an address space from a file stored in the file system.
- If the ELF header has the right magic number, exec assumes that the binary is well-formed.
- Small pages make sense when physical memory is small, to allow allocation and page-out to disk with fine granularity.
- Larger pages make sense on machines with lots of RAM, and may reduce overhead for page-table manipulation.
- Today people care more about speed than space-efficiency.
- When running a process, a CPU executes the normal processor loop: read an instruction, advance the program counter, execute the instruction, repeat.
- The term exception refers to an illegal program action that generates an interrupt.
- The term interrupt refers to a signal generated by a hardware device, indicating that it needs attention of the operating system.
- The kernel handles all interrupts, rather than processes handling them, because in most cases only the kernel has the required privilege and state.
- An interrupt stops the normal processor loop and starts executing a new sequence called an interrupt handler.
- It is important to remember that traps are caused by the current process running on a processor, and interrupts are caused by devices and may not be related to the currently running process.
- The x86 has 4 protection levels, numbered 0 (most privileged) to 3 (least privileged). In practice, most operating systems use only 2 levels: 0 and 3, which are then called kernel mode and user mode, respectively.
- The current privilege level with which the x86 executes instructions is stored in the %cs register, in the field CPL.
- On the x86, interrupt handlers are defined in the interrupt descriptor table (IDT). The IDT has 256 entries, each giving the %cs and %eip to be used when handling the corresponding interrupt.
- To make a system call on the x86, a program invokes the int n instruction, where n specifies the index into the IDT.
- An operating system can use the iret instruction to return from an int instruction. It pops the saved values during the int instruction from the stack, and resumes execution at the saved %eip.
- System calls conventionally return negative numbers to indicate errors, positive numbers for success.
- Interrupts are similar to system calls, except devices generate them at any time.
- A processor can control if it wants to receive interrupts through the IF flag in the eflags register. The instruction cli disables interrupts on the processor by clearing IF, and sti enables interrupts on a processor.
- xv6 disables interrupts during booting of the main cpu and the other processors.
- A driver is the piece of code in an operating system that manages a particular device: it provides interrupt handlers for a device, causes a device to perform operations, causes a device to generate interrupts, etc.
- Driver code can be tricky to write because a driver executes concurrently with the device that it manages.
- In many operating systems, the drivers together account for more code in the operating system than the core kernel.
- Typically devices are slower than the CPU, so the hardware uses interrupts to notify the operating system of status changes.
- Using DMA means that the CPU is not involved at all in the [data] transfer, which can be more efficient and is less taxing for the CPU's memory caches.
- All modern devices are programmed using memory-mapped I/O.
- A lock provides mutual exclusion, ensuring that only one CPU at a time can hold the lock.
- In one atomic operation, xchg swaps a word in memory with the contents of a register.
- Interrupts can cause concurrency even on a single processor: if interrupts are enabled, kernel code can be stopped at any moment to run an interrupt handler instead.
- It is possible to implement locks without atomic instructions, but it is expensive, and most operating systems use atomic instructions.
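A sketch of the spin-acquire idea in C, using GCC/Clang atomic builtins in place of a hand-written xchg (this is not xv6's actual acquire, which also disables interrupts):

```c
#include <stdio.h>

struct spinlock { volatile int locked; };

static void acquire(struct spinlock *lk) {
    /* Atomically swap 1 into locked; keep trying until the old value was 0,
     * i.e. until we are the ones who changed the lock from free to held. */
    while (__sync_lock_test_and_set(&lk->locked, 1) != 0)
        ;                                /* spin */
    __sync_synchronize();                /* keep critical-section accesses after the lock */
}

static void release(struct spinlock *lk) {
    __sync_synchronize();                /* make critical-section writes visible first */
    __sync_lock_release(&lk->locked);    /* atomically store 0 */
}

int main(void) {
    struct spinlock lk = { 0 };
    acquire(&lk);
    puts("in the critical section");
    release(&lk);
    return 0;
}
```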
- Any operating system is likely to run with more processes than the computer has processors, and so a plan is needed to time-share the processors among the processes.
- Switching from one thread to another involves saving the old thread's CPU registers, and restoring previously-saved registers of the new thread; the fact that %esp and %eip are saved and restored means that the CPU will switch stacks and switch what code it is executing.
- Each pipe is represented by a struct pipe, which contains a lock and a data buffer.
- The xv6 scheduler implements a simple scheduling policy, which runs each process in turn. This policy is called round robin.
- A semaphore is an integer value with two operations, increment and decrement (or up and down). It is always possible to increment a semaphore, but the semaphore value is not allowed to drop below zero.
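A minimal counting semaphore matching that definition, sketched on top of pthreads (illustration only, not xv6 code): down blocks instead of letting the count fall below zero, up always succeeds. Build with -pthread.

```c
#include <pthread.h>

typedef struct {
    int count;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} semaphore;

static void sem_setup(semaphore *s, int initial) {
    s->count = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

static void down(semaphore *s) {             /* decrement, waiting while count == 0 */
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

static void up(semaphore *s) {               /* increment, wake one waiter */
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}

int main(void) {
    semaphore s;
    sem_setup(&s, 0);
    up(&s);       /* count: 0 -> 1 */
    down(&s);     /* count: 1 -> 0; would block if the count were still 0 */
    return 0;
}
```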
- The purpose of a file system is to organize and store data.
- One of the most interesting problems in file system design is crash recovery. The problem arises because many file system operations involve multiple writes to the disk, and a crash after a subset of the writes may leave the on-disk file system in an inconsistent state.
- One of the cool aspects of the Unix interface is that most resources in Unix are represented as a file, including devices such as the console, pipes, and of course, real files. The file descriptor layer is the layer that achieves this uniformity.
- All the open files in the system are kept in a global file table, the ftable.
- From our point of view, we can abstract the PC into three components: CPU, memory, and input/output (I/O) devices. The CPU performs computation, the memory contains instructions and data for that computation, and devices allow the CPU to interact with hardware for storage, communication, and other functions.
- A computer's CPU runs a conceptually simple loop: it consults an address in a register called the program counter, reads a machine instruction from that address in memory, advances the program counter past the instruction, and executes the instruction. Repeat.
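That loop is short enough to write down. A toy version in C, with made-up one-byte opcodes, just to make the consult/read/advance/execute steps concrete:

```c
/* Toy fetch-decode-execute loop for an imaginary machine with one-byte
   opcodes; the opcodes and memory model are made up for illustration. */
enum { OP_HALT = 0, OP_NOP = 1 };

void run(const unsigned char *mem, unsigned pc) {
  for (;;) {
    unsigned char op = mem[pc];   /* consult the PC, read the instruction */
    pc++;                         /* advance the PC past the instruction */
    switch (op) {                 /* execute it */
    case OP_NOP:  break;
    case OP_HALT: return;
    default:      return;         /* unknown opcode: stop */
    }
  }
}
```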
- A register is a storage cell inside the processor itself, capable of holding a machine word-sized value.
- The modern x86 provides eight general purpose 32-bit registers: eax, ebx, ecx, edx, edi, esi, ebp, and esp--and a program counter eip.
- Registers are fast but expensive.
- Main memory is 10-100x slower than a register, but it is much cheaper, so there can be more of it. One reason main memory is relatively slow is that it is physically separate from the processor chip.
- The x86 processor provides special in and out instructions that read and write values from device addresses called I/O ports.
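In C these instructions are usually wrapped in small inline-assembly helpers, in the style of xv6's x86.h. A sketch (GCC syntax, x86 only):

```c
/* Inline-assembly wrappers for the x86 in/out instructions (GCC syntax,
   x86 only); modeled after the style of xv6's x86.h. */
static inline unsigned char inb(unsigned short port) {
  unsigned char data;
  __asm__ volatile("inb %1, %0" : "=a"(data) : "d"(port));
  return data;
}

static inline void outb(unsigned short port, unsigned char data) {
  __asm__ volatile("outb %0, %1" : : "a"(data), "d"(port));
}
```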
20181129
seL4 Reference Manual
- The seL4 microkernel is an operating-system kernel designed to be a secure, safe, and reliable foundation for systems in a wide variety of application domains.
- As a microkernel, it provides a small number of services to applications, such as abstractions to create and manage virtual address spaces, threads, and inter-process communication (IPC).
- A limited number of service primitives are provided by the microkernel; more complex services may be implemented as applications on top of these primitives.
- Threads are an abstraction of CPU execution that supports running software.
- Address spaces are virtual memory spaces that each contain an application. Applications are limited to accessing memory in their address space.
- Inter-process communication (IPC) via endpoints allows threads to communicate using message passing.
- Notifications provide a non-blocking signalling mechanism similar to binary semaphores.
- Device primitives allow device drivers to be implemented as unprivileged applications. The kernel exports hardware device interrupts via IPC messages.
- Capability spaces store capabilities (i.e. access rights) to kernel services along with their book-keeping information.
- The seL4 microkernel provides a capability-based access-control model. Access control governs all kernel services; in order to perform an operation, an application must invoke a capability in its possession that has sufficient access rights for the requested service.
- A capability is an unforgeable token that references a specific kernel object (such as a thread control block) and carries rights that control what methods may be invoked.
- Conceptually, a capability resides in an application's capability space; an address in this space refers to a slot which may or may not contain a capability.
- Capability spaces are implemented as a directed graph of kernel-managed capability nodes.
- Capabilities can also be revoked to withdraw authority. Revocation recursively removes any capabilities that have been derived from the original capability being revoked.
- The seL4 kernel provides a message-passing service for communication between threads. This mechanism is also used for communication with kernel-provided services.
- Logically, the kernel provides three system calls: send, receive, and yield. However, there are also combinations and variants of the basic send and receive calls.
- seL4_Send() delivers a message through the named capability and allows the application to continue.
- seL4_Recv() is used by a thread to receive messages through endpoints or notifications.
- seL4_Yield() is the only system call that does not require a capability to be used. It forfeits the remainder of the calling thread's time slice and causes invocation of the kernel's scheduler.
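A sketch of how those calls look from user space with the libsel4 C bindings, as I understand them (non-MCS API; the exact signatures should be checked against the manual). `ep` is assumed to be a capability slot holding an endpoint capability with suitable rights.

```c
#include <sel4/sel4.h>

/* Sketch only: sends a one-word message on an endpoint, blocks to receive
   one back, then yields. */
void ipc_demo(seL4_CPtr ep) {
  /* Build a message with label 0 and one message register, then send it. */
  seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1);
  seL4_SetMR(0, 42);
  seL4_Send(ep, info);

  /* Block until a message arrives; the sender's badge (if any) lands in `badge`. */
  seL4_Word badge;
  seL4_MessageInfo_t reply = seL4_Recv(ep, &badge);
  (void)reply;

  /* Give up the remainder of this thread's time slice. */
  seL4_Yield();
}
```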
- CNodes store capabilities, giving a thread permission to invoke methods on particular objects.
- Thread Control Blocks represent a thread of execution in seL4.
- Endpoints facilitate message-passing communication between threads.
- IPC is synchronous: A thread trying to send or receive on an endpoint blocks until the message can be delivered.
- Notification Objects provide a simple signalling mechanism. A notification is a word-sized array of flags, each of which behaves like a binary semaphore.
- Virtual Address Space Objects are used to construct a virtual address space for one or more threads.
- Interrupt Objects give applications the ability to receive and acknowledge interrupts from hardware devices.
- Untyped Memory is the foundation of memory allocation in the seL4 kernel.
- The seL4 microkernel does not dynamically allocate memory for kernel objects. Instead, objects must be explicitly created from application-controlled memory regions via Untyped Memory capabilities.
- There are no arbitrary resource limits in the kernel apart from those dictated by the hardware, and so many denial-of-service attacks via resource exhaustion are avoided.
- At boot time, seL4 pre-allocates the memory required for the kernel itself, including the code, data, and stack sections (seL4 is a single kernel-stack operating system). It then creates an initial user thread (with an appropriate address and capability space). The kernel then hands all remaining memory to the initial thread in the form of capabilities to Untyped Memory, and some additional capabilities to kernel objects that were required to bootstrap the initial thread.
- Each user-space thread has an associated capability space (CSpace) that contains the capabilities that the thread possesses, thereby governing which resources the thread can access.
- A CNode is a table of slots, each of which may contain a capability. This may include capabilities to further CNodes, forming a directed graph.
- seL4 requires the programmer to manage all in-kernel data structures, including CSpaces, from user-space. This means that the user-space programmer is responsible for constructing CSpaces as well as addressing capabilities within them.
- Capabilities are managed largely through invoking CNode methods.
- Some capability types have access rights associated with them. The access rights associated with a capability determine the methods that can be invoked.
- seL4 supports three orthogonal access rights, which are Read, Write, and Grant.
- Like a virtual memory address, a capability address is simply an integer. Rather than referring to a location of physical memory (as does a virtual memory address), a capability address refers to a capability slot.
- The seL4 microkernel provides a message-passing IPC mechanism for communication between threads. The same mechanism is also used for communication with kernel provided services.
- Messages are sent by invoking a capability to a kernel object.
- Endpoint capabilities may be minted to create a new endpoint capability with a badge attached to it, a data word chosen by the invoker of the mint operation.
- Notifications are a simple, non-blocking signalling mechanism that logically represents a set of binary semaphores.
- A Notification object contains a single data word, called the notification word.
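A sketch of the notification mechanism from user space, again assuming the libsel4 C bindings (seL4_Signal / seL4_Wait) and a badged Notification capability; treat the details as assumptions to verify against the manual.

```c
#include <sel4/sel4.h>

/* Sketch of a Notification used as a set of binary semaphores. `ntfn` is
   assumed to be a badged Notification capability. */
void signaller(seL4_CPtr ntfn) {
  seL4_Signal(ntfn);        /* OR this capability's badge into the notification word */
}

void waiter(seL4_CPtr ntfn) {
  seL4_Word bits;
  seL4_Wait(ntfn, &bits);   /* block until the word is non-zero, then take and clear it */
  /* `bits` now says which flags were raised. */
  (void)bits;
}
```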
- seL4 provides threads to represent an execution context and manage processor time. A thread is represented in seL4 by its thread control block object (TCB). Each TCB has an associated CSpace and VSpace which may be shared with other threads.
- In multi-core machines, threads run on the same CPU which originally created the TCB.
- seL4 uses a preemptive round-robin scheduler with 256 priority levels. All threads have a maximum controlled priority (MCP) and a priority, the latter being the effective priority of the thread.
- Each thread has an associated exception-handler endpoint. If the thread causes an exception, the kernel creates an IPC message with the relevant details and sends this to the endpoint. This thread can then take appropriate action.
- A thread's actions may result in a fault. Faults are delivered to the thread's exception handler so that it can take the appropriate action. The fault type is specified in the message label.
- User exceptions are used to deliver architecture-defined exceptions.
- Debug exceptions are used to deliver trace and debug related events to threads.
- Domains are used to isolate independent subsystems, so as to limit information flow between them. The kernel switches between domains according to a fixed, time-triggered schedule.
- A thread belongs to exactly one domain, and will only run when that domain is active.
- A virtual address space in seL4 is called a VSpace.
- Common to every architecture is the Page, representing a frame of physical memory.
- A Page object corresponds to a frame of physical memory that is used to implement virtual memory pages in a virtual address space.
- Page faults are reported to the exception handler of the faulting thread.
- Interrupts are delivered as notifications. A thread may configure the kernel to signal a particular Notification object each time a certain interrupt triggers.
- IRQHandler capabilities represent the ability of a thread to configure a certain interrupt.
- Access to I/O ports is controlled by IO Port capabilities. Each IO Port capability identifies a range of ports that can be accessed with it.
- I/O devices capable of DMA present a security risk because the CPU's MMU is bypassed when the device accesses memory.
- The seL4 kernel creates a minimal boot environment for the initial thread. This environment consists of the initial thread's TCB, CSpace and VSpace, consisting of frames that contain the user-land image and the IPC buffer.
20181121
Programming Paradigms for Dummies by Peter Van Roy
- More is not better (or worse) than less, just different.
- Solving a programming problem requires choosing the right concepts.
- A programming paradigm is an approach to programming a computer based on a mathematical theory or a coherent set of principles.
- A language should ideally support many concepts in a well-factored way, so that the programmer can choose the right concepts whenever they are needed without being encumbered by the others.
- The first key property of a paradigm is whether or not it can express observable nondeterminism.
- We recall that nondeterminism is when the execution of a program is not completely determined by its specification, i.e., at some point during the execution the specification allows the program to choose what to do next.
- The second key property of a paradigm is how strongly it supports state.
- State is the ability to remember information, or more precisely, to store a sequence of values in time.
- Computer programming permits the construction of the most complex systems.
- A programming language is not designed in a vacuum, but for solving certain kinds of problems.
- Declarative programming is at the very core of programming languages.
- Declarative programming will stay at the core for the foreseeable future.
- Deterministic concurrency is an important form of concurrent programming that should not be ignored.
- Message-passing concurrency is the correct default for general-purpose concurrent programming instead of shared-state concurrency.
- The ultimate software system is one that does not require any human assistance. Such a system is called self-sufficient.
- Programming paradigms are built out of programming concepts.
- A record is a data structure: a group of references to data items with indexed access to each item.
- The record is the foundation of symbolic programming.
- Many important data structures such as arrays, lists, strings, trees, and hash tables can be derived from records.
- When combined with closures, records can be used for component-based programming.
- The lexically scoped closure is an enormously powerful concept that is at the heart of programming.
- From an implementation viewpoint, a closure combines a procedure with its external references.
- Many abilities normally associated with specific paradigms are based on closures.
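Since a closure, from the implementation viewpoint, is a procedure plus its external references, the pairing can be modeled by hand even in a language without closures. A C sketch (names invented for illustration):

```c
/* A closure modeled explicitly: a code pointer paired with the captured
   environment (its external references). */
struct add_env { int amount; };                  /* the "free variables" */

struct closure {
  int (*code)(struct add_env *env, int arg);     /* the procedure */
  struct add_env env;                            /* its environment */
};

static int add_code(struct add_env *env, int arg) {
  return arg + env->amount;
}

/* make_adder(3) returns a closure that adds 3 to whatever it is given. */
struct closure make_adder(int amount) {
  struct closure c = { add_code, { amount } };
  return c;
}

/* Usage:
     struct closure add3 = make_adder(3);
     int r = add3.code(&add3.env, 4);   // r == 7 */
```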
- Component-based programming is a style of programming in which programs are organized as components, where each component may depend on other components.
- To implement independence we need a new programming concept called concurrency. When two parts do not interact at all, we say they are concurrent.
- Concurrency is a language concept, and parallelism is a hardware concept.
- The fundamental difference between processes and threads is how resource-allocation is done.
- The operating system’s chief role is to arbitrate the resource requests done by all the processes and to allocate resources in a fair way.
- Despite their popularity, monitors are the most difficult concurrency primitive to program with.
- State introduces an abstract notion of time in programs.
- A good rule is that named state should never be invisible: there should always be some way to access it from the outside.
- The main advantage of named state is that a program can become modular. The main disadvantage is that the program can become incorrect.
- A user of a data abstraction does not need to understand how the abstraction is implemented.
- Object-oriented programming, as it is usually understood, is based on data abstraction with polymorphism and inheritance.
- In computer programming, we say an entity is polymorphic if it can take arguments of different types. This ability is very important for organizing large programs so that the responsibilities of the program’s design are concentrated in well-defined places instead of being spread out over the whole program.
- We recommend using composition instead of inheritance.
- One of the major problems of concurrent programming is nondeterminism.
- Debugging and reasoning about programs with race conditions is very difficult.
- In lazy execution, it is the consumer of a result that decides whether or not to perform a calculation, not the producer.
- Lazy execution does the least amount of calculation needed to get a result.
- Decades of research show that parallel programming cannot be completely hidden from the programmer: it is not possible in general to automatically transform an arbitrary program into a parallel program.
- The best we can do is make parallel programming as easy as possible.
- Repeated code is a source of errors: if one copy is fixed, all copies have to be fixed.
- The programming language and its libraries should help not hinder the programmer.
- Programming languages should support several paradigms because different problems require different concepts to solve them.
- Each paradigm has its own “soul” that can only be understood by actually using the paradigm.
20181107
Project Oberon by Niklaus Wirth
- In spite of great leaps forward, hardware is becoming faster more slowly than software is becoming slower.
- The vast complexity of popular operating systems makes them not only obscure, but also provides opportunities for “back doors”.
- Mostly thanks to the regularity of the RISC instruction set, the size of the compiler could be reduced significantly.
- Our programs should be expressed in a manner that makes no reference to machine peculiarities and low-level programming facilities, perhaps with the exception of device interfaces, where dependence is inherent.
- In order to warrant the sizeable effort of designing and constructing an entire operating system from scratch, a number of basic concepts need to be novel.
- The fundamental objective of an operating system is to present the computer to the user and to the programmer at a certain level of abstraction.
- Every abstraction is characterized by certain properties and governed by a set of operations.
- Every abstraction inherently hides details, namely those from which it abstracts.
- High interactivity requires high bandwidth, and the only channel of human users with high bandwidth is the eye. Consequently, the computer’s visual output unit must be properly matched with the human eye.
- In the Oberon system, the display is partitioned into viewers, also called windows, or more precisely frames, rectangular areas of the screen.
- High interactivity requires not only a high bandwidth for visual output, it demands also flexibility of input.
- In Oberon, the notion of a unit of action is separated from the notion of a unit of compilation.
- One of the rules of what may be called the Oberon programming style is therefore to avoid hidden states, and to reduce the introduction of global variables.
- We classify Oberon as a single-process (or single-thread) system. [...] Unless engaged in the interpretation of a command, the processor is engaged in a loop continuously polling event sources. This loop is called the central loop; it is contained in module Oberon which may be regarded as the system’s heart.
- The primary advantage of a system dealing with a single process is that task switches occur at user-defined points only, where no local process state has to be preserved until resumption.
- The Oberon system features no separate linker. A module is linked with its imports when it is loaded, never before.
- The high priority given in the system’s conception to modularity, to avoid unnecessary frills, and to concentrate on the indispensable in the core, has resulted in a system of remarkable compactness.
- We do not consider it as good engineering practice to consume a resource lavishly just because it happens to be cheap.
- Implementation of a system proceeds bottom-up. Naturally, because modules on higher levels are clients of those on the lower levels and cannot function without the availability of their imports.
- Description of a system, on the other hand, is better ordered in the top-down direction. This is because a system is designed with its expected applications and functions in mind.
- Commands in Oberon are explicit, atomic units of interactive operations.
- It is the generic ability to perform every conceivable task that turns a computing device into a versatile universal tool.
- Transfers of control between tasks are implemented in Oberon as ordinary calls and returns of ordinary procedures. Preemption is not possible.
- Interactive tasks are triggered by input data being present, either from the keyboard, the mouse, or other input sources. Background tasks are taken up in a round-robin manner. Interactive tasks have priority.
- The most important generic function of any operating system is executing programs.
- Quintessentially, Oberon programs are represented as commands: exported parameterless procedures that do not interact with the user of the system.
- The concept of abstraction is arguably the most important achievement of programming language development.
- The term loading refers to the transfer of the module code from the file into the main memory, from where the processor fetches individual instructions.
- The linking process may require a significant amount of address computations.
- The purpose of the loader is to read object files, and to transform the file representation of modules into their internal image.
- It is essential that a computer system has a facility for storing data over longer periods of time and for retrieving the stored data. Such a facility is called a file system.
- A file system must not only provide the concept of a sequence with its accessing mechanism, but also a registry. This implies that files be identified, that they can be given a name by which they are registered and retrieved.
- Experience shows that in practice most files are quite short, i.e. in the order of a few thousand bytes.
- A crucial property of the Oberon system is centralized resource management.
- Device drivers are collections of procedures that constitute the immediate interface between hardware and software.
- Drivers are inherently hardware specific, and the justification of their existence is precisely that they encapsulate these specifics and present to their clients an appropriate abstraction of the device.
- The keyboard codes received from the keyboard via a PS/2 line are not identical with the character values delivered to the Read procedures. A conversion is necessary. This is so, because modern keyboards treat all keys in the same way, including the ones for upper case, control, alternative, etc. Separate codes are sent to signal the pushing down and the release of a key, followed by another code identifying which key had been pressed or released.
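A sketch of that conversion: track the break prefix and the shift state, then map make codes through a table. The prefix value, shift code, and table contents below are placeholders rather than a real PS/2 layout.

```c
/* Illustrative PS/2-style decoding: a 0xF0 prefix marks a key release, and
   shift state is tracked separately. The code values and table contents
   are placeholders, not a real keyboard layout. */
enum { BREAK_PREFIX = 0xF0, CODE_SHIFT = 0x12 };

static int shift_down = 0;
static int release_next = 0;

static const char plain[256]   = { [0x1C] = 'a' };   /* make code -> character */
static const char shifted[256] = { [0x1C] = 'A' };

/* Returns the decoded character, or 0 if this code produced none. */
char decode(unsigned char code) {
  if (code == BREAK_PREFIX) { release_next = 1; return 0; }
  if (release_next) {                      /* this code names the released key */
    release_next = 0;
    if (code == CODE_SHIFT) shift_down = 0;
    return 0;
  }
  if (code == CODE_SHIFT) { shift_down = 1; return 0; }
  return shift_down ? shifted[code] : plain[code];
}
```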
- Oberon is a single-process system where every command monopolizes the processor until termination.
- It appears to be a universal law that centralization inevitably calls for an administration.
- The compiler is the primary tool of the system builder.
- Compilation of a program text proceeds by analyzing the text and thereby decomposing it recursively into its constructs according to the syntax. When a construct is identified, code is generated according to the semantic rule associated with the construct.
- The recognition of symbols within a character sequence is called lexical analysis.
- Procedure bodies are surrounded by a prolog (entry code) and an epilog (exit code).
- Besides the parsing of text, the Parser also performs the checking for type consistency of objects.
- The superiority of a tree structure becomes manifest only when a large number of global objects is declared.
- Procedure calls cause a sequence of frames to be allocated in a stack fashion. These frames are the storage space for local variables.
- Static typing is an important principle in programming languages. It implies that every constant, variable or function is of a certain data type, and that this type can be derived by reading the program text without executing it. It is the key principle to introduce important redundancy in languages in such a form that a compiler can detect inconsistencies. It is therefore the key element for reducing the number of errors in programs.
- Implementation of multiplication in hardware made the operation about 30 times faster than its solution by software.
- The principal task of the control unit is to generate the address of the next instruction.
- The system’s core consists of a loop which consistently senses for a command to appear.
- Make it as simple as possible, but not simpler.
- Oberon is a general-purpose programming language that evolved from Modula-2. Its principal new feature is the concept of type extension.
20181105
Crafting Interpreters by Bob Nystrom
- Static type systems in particular require rigorous formal reasoning.
- Being able to reason precisely and formally about syntax and semantics is a vital skill when working on a language.
- For every successful general-purpose language out there, there are a thousand successful niche ones.
- Implementing a language is a real test of programming skill.
- You must master recursion, dynamic arrays, trees, graphs, and hash tables.
- A compiler reads in files in one language and translates them to files in another language. You can implement a compiler in any language, including the same language it compiles, a process called “self-hosting”.
- C is the perfect language for understanding how an implementation really works, all the way down to the bytes in memory and the code flowing through the CPU.
- The first step is scanning, or lexing, or (if you’re trying to impress someone) lexical analysis.
- A scanner takes in a linear stream of characters and chunks them together into a series of something more akin to “words”. In programming languages, each of these words is called a token.
- The next step is parsing. This is where our syntax gets a grammar--the ability to compose larger expressions and statements out of smaller parts.
- A parser takes a flat sequence of tokens and builds a tree structure that mirrors the nested nature of the grammar.
- The first bit of analysis that most languages do is called binding or resolution. For each identifier we find out where that name is defined and wire the two together. This is where scope comes into play--the region of source code where a certain name can be used to refer to a certain declaration.
- The most powerful bookkeeping tool is to transform the tree into an entirely new data structure that more directly expresses the semantics of the code.
- You can think of the compiler as a pipeline where each stage's job is to organize the code in a way that makes the next stage simpler to implement.
- If some expression always evaluates to the exact same value we can do the evaluation at compile time and replace the code for the expression with its result.
- Optimizing is a huge part of the programming language business.
- Many successful languages have surprisingly few compile-time optimizations.
- Native code is lightning fast, but generating it is a lot of work.
- Speaking the chip’s language also means your compiler is tied to a specific architecture.
- In a fully compiled language, the code implementing the runtime gets inserted directly into the resulting executable.
- If the language is run inside an interpreter or a VM, then the runtime lives there.
- Compiling is an implementation technique that involves translating a source language to some other--usually lower-level--form.
- When we say a language implementation “is a compiler”, we mean it translates source code to some other form but doesn’t execute it.
- When we say an implementation “is an interpreter”, we mean it takes in source code and executes it immediately.
- C’s most egregious grammar problems are around types.
- A static type system is a ton of work to learn and implement.
- High-level languages exist to eliminate error-prone, low-level drudgery.
- There are two main techniques for managing memory: reference counting and tracing garbage collection.
- Where an expression’s main job is to produce a value, a statement’s job is to produce an effect.
- An argument is an actual value you pass to a function when you call it.
- A parameter is a variable that holds the value of the argument inside the body of the function.
- Prototypes are simpler in the language, but they seem to accomplish that only by pushing the complexity onto the user.
- The idea behind object-oriented programming is encapsulating behavior and state together.
- If you care about making a language that is actually usable, then handling errors gracefully is vital.
- It’s a good engineering practice to separate the code that generates the errors from the code that reports them.
- Our job is to scan through the list of characters and group them together in the smallest sequences that still represent something. Each of these blobs of characters is called a lexeme.
- An interactive prompt is also called a REPL.
- That’s a token: a bundle containing the raw lexeme along with the other things the scanner learned about it.
- The core of the scanner is a loop. Starting at the first character of the source code, it figures out what lexeme it belongs to, and consumes it and any following characters that are part of that lexeme. When it reaches the end of that lexeme, it emits a token.
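A stripped-down version of that loop in C, with invented token kinds, just to show the consume-a-whole-lexeme-then-emit shape:

```c
#include <ctype.h>

/* Stripped-down scanner loop: skip whitespace, consume one whole lexeme
   starting at the current character, emit a token for it. Token kinds are
   invented for this sketch. */
typedef enum { TOK_NUMBER, TOK_IDENT, TOK_OTHER, TOK_EOF } TokenKind;
typedef struct { TokenKind kind; const char *start; int length; } Token;

Token scan_token(const char **src) {
  const char *p = *src;
  while (isspace((unsigned char)*p)) p++;        /* whitespace separates lexemes */

  const char *start = p;
  Token t = { TOK_EOF, start, 0 };
  if (*p == '\0') { *src = p; return t; }

  if (isdigit((unsigned char)*p)) {              /* number: consume every digit */
    while (isdigit((unsigned char)*p)) p++;
    t.kind = TOK_NUMBER;
  } else if (isalpha((unsigned char)*p)) {       /* identifier: letters and digits */
    while (isalnum((unsigned char)*p)) p++;
    t.kind = TOK_IDENT;
  } else {                                       /* single-character lexeme */
    p++;
    t.kind = TOK_OTHER;
  }
  t.length = (int)(p - start);
  *src = p;                                      /* the caller resumes after the lexeme */
  return t;
}
```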
- Maximal munch: When two grammar rules can both match a chunk of code that the scanner is looking at, whichever one matches the most characters wins.
- Regular expressions aren’t powerful enough to handle expressions which can nest arbitrarily deeply.
- A formal grammar takes a set of atomic pieces it calls its “alphabet”. Then it defines a (usually infinite) set of “strings” that are “in” the grammar. Each string is a sequence of “letters” in the alphabet.
- A formal grammar’s job is to specify which strings are valid and which aren’t.
- The visitor pattern is really about approximating the functional style within an OOP language.
- Unlike overriding, overloading is statically dispatched at compile time.
- When we debug our parser and interpreter, it’s often useful to look at a parsed syntax tree and make sure it has the structure we expect.
- Converting a tree to a string is sort of the opposite of a parser, and is often called “pretty printing” when the goal is to produce a string of text that is valid syntax in the source language.
- Writing a real parser--one with decent error-handling, a coherent internal structure, and the ability to robustly chew through a sophisticated syntax--is considered a rare, impressive skill.
- Precedence determines which operator is evaluated first in an expression containing a mixture of different operators.
- Associativity determines which operator is evaluated first in a series of the same operator.
- Recursive descent is the simplest way to build a parser, and doesn’t require using complex parser generator tools.
- Recursive descent parsers are fast, robust, and can support sophisticated error handling.
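The way recursive descent encodes precedence and associativity is worth seeing once: one function per precedence level, each calling the next-tighter level, with a loop giving left associativity. A toy C sketch over single digits and + and *:

```c
/* One parsing function per precedence level: addition calls multiplication,
   multiplication calls primary, so '*' binds tighter than '+'. The while
   loops make each operator left-associative. Parses single-digit operands
   only, e.g. "1+2*3". */
static const char *p;

static int primary(void)        { return *p++ - '0'; }

static int multiplication(void) {
  int left = primary();
  while (*p == '*') { p++; left = left * primary(); }
  return left;
}

static int addition(void) {
  int left = multiplication();
  while (*p == '+') { p++; left = left + multiplication(); }
  return left;
}

int parse(const char *src) { p = src; return addition(); }
/* parse("1+2*3") == 7, because multiplication is evaluated first. */
```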
- A parser really has two jobs:
  - Given a valid sequence of tokens, produce a corresponding syntax tree.
  - Given an invalid sequence of tokens, detect any errors and tell the user about their mistakes.
- Syntax errors are a fact of life and language tools have to be robust in the face of them.
- The way a parser responds to an error and keeps going to look for later errors is called “error recovery”.
- The traditional place in the grammar to synchronize is between statements.
- Taking advantage of what users already know is one of the most powerful tools you can use to ease adoption of your language. It’s almost impossible to overestimate how useful this is.
- A literal is a bit of syntax that produces a value.
- Once you misinterpret bits in memory, all bets are off.
- While a runtime error needs to stop evaluating the expression, it shouldn’t kill the interpreter. If a user is running the REPL and has a typo in a line of code, they should still be able to keep going and enter more code after that.
- Some languages are statically typed, which means type errors are detected and reported at compile time, before any code is run.
- Others are dynamically typed and defer checking for type errors until runtime right before an operation is attempted.
- A key reason users choose statically typed languages is because of the confidence the language gives them that certain kinds of errors can never occur when their program is run.
- To support bindings, our interpreter needs internal state.
- State and statements go hand in hand. Since statements, by definition, don’t evaluate to a value, they need to do something else to be useful. That something is called a “side effect”.
- A single token lookahead recursive descent parser can’t see far enough to tell that it’s parsing an assignment until after it has gone through the left-hand side and stumbled onto the =.
- The key difference between assignment and definition is that assignment is not allowed to create a new variable.
- A scope is a region where a name maps to a certain entity. Multiple scopes enable the same name to refer to different things in different contexts.
- Lexical scope is a specific style of scope where the text of the program itself shows where a scope begins and ends.
- This is in contrast with dynamic scope where you don’t know what a name refers to until you execute the code.
- One motivation for lexical scope is encapsulation--a block of code in one corner of the program shouldn’t interfere with some other one.
- When a local variable has the same name as a variable in an enclosing scope, it shadows the outer one. Code inside the block can’t see it anymore, but it’s still there.
- The main advantage to implicit declaration is simplicity.
- In fact, any programming language with some minimum level of expressiveness is powerful enough to compute any computable function.
- Reducing syntactic sugar to semantically equivalent but more verbose forms is called desugaring.
- A rich syntax makes the language more pleasant and productive to work in.
- Arity is the fancy term for the number of arguments a function or operation expects.
- When it comes to making your language actually good at doing useful stuff, the native functions your implementation provides are key. They provide access to the fundamental services that all programs are defined in terms of.
- Many languages also allow users to provide their own native functions. The mechanism for doing so is called a foreign function interface (FFI), native extension, native interface, or something along those lines.
- By using different environments when we execute the body, calls to the same function with the same code can produce different results.
- When a function is declared, it captures a reference to the current environment.
- A closure retains a reference to the environment instance in play when the function was declared.
- There are three broad paths to object-oriented programming: classes, prototypes, and multimethods.
- Doing a hash table lookup for every field access is fast enough for many language implementations, but not ideal.
- Methods and fields let us encapsulate state and behavior together so that an object always stays in a valid configuration.
- Prototypes are simpler than classes--less code for the language implementer to write, and fewer concepts for the user to learn and understand.
- Breadth is the range of different things the language lets you express.
- Ease is how little effort it takes to make the language do what you want.
- Complexity is how big the language is.
- Inheriting from another class means that everything that’s true of the superclass should be true, more or less, of the subclass.
- A tree-walk interpreter is fine for some kinds of high-level, declarative languages. But for a general-purpose, imperative language--even a “scripting” language--it won’t fly.
- A dynamically-typed language is never going to be as fast as a statically-typed language with manual memory management.
- In engineering, few choices are without trade-offs.
- Modern CPUs process data way faster than they can pull it from RAM. To compensate for that, chips have multiple layers of caching.
- Many implementations of malloc() store the allocated size in memory right before the returned address.
- C asks us not just to manage memory explicitly, but mentally.
- A trie stores a set of strings.
- Tries are a special case of an even more fundamental data structure: a deterministic finite automaton.
- We sometimes fall into the trap of thinking that performance comes from complicated data structures, layers of caching, and other fancy optimizations. But, many times, all that’s required is to do less work, and I often find that writing the simplest code I can is sufficient to accomplish that.
- Bytecode was good enough for Niklaus Wirth.
- Pratt parsers are a sort of oral tradition in industry.
- Vaughn Pratt’s “top-down operator precedence parsing” is the most elegant way I know to parse expressions.
- A compiler has roughly two jobs. It parses the user’s source code to understand what it means. Then it takes that knowledge and outputs low-level instructions that produce the same semantics.
- Good error handling and reporting is more valuable to users than almost anything else you can put time into in the front end.
- You wouldn’t believe the breadth of software problems that miraculously seem to require a new little language in their solution as soon as you ask a compiler hacker for help.
- The nice thing about working in C is that we can build our data structures from the raw bits up.
- A value contains two parts: a “type” tag, and a payload for the actual value.
- A union looks like a struct except that all of its fields overlap in memory.
- The size of a union is the size of its largest field.
- Most architectures prefer values to be aligned to their size.
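Putting the last few notes together, a tagged-union value representation looks roughly like this in C (names are illustrative, close in spirit to but not copied from clox):

```c
#include <stdbool.h>

/* A tagged union: the tag says which member of the payload is live. */
typedef enum { VAL_BOOL, VAL_NIL, VAL_NUMBER } ValueType;

typedef struct {
  ValueType type;        /* the "type" tag */
  union {                /* the payload: these fields overlap in memory */
    bool boolean;
    double number;
  } as;                  /* the union is as large as its largest field (the double) */
} Value;

/* sizeof(Value) also includes padding so the double stays aligned to its size. */
static inline Value number_val(double d) {
  Value v;
  v.type = VAL_NUMBER;
  v.as.number = d;
  return v;
}
```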
- A bytecode VM spends much of its execution time reading and decoding instructions. The fewer, simpler instructions you need for a given piece of behavior, the faster it goes.
- There’s no maximum length for a string.
- We need a way to support values whose size varies, sometimes greatly. This is exactly what dynamic allocation on the heap is designed for.
- The [garbage] collector must ensure it can find every bit of memory that is still being used so that it doesn’t collect live data.
- If your language needs GC, get it working as soon as you can.
- Choosing a character representation and encoding involves fundamental trade-offs.
20181030
Easy 6502
- Assembly language is the lowest level of abstraction in computers -- the point at which the code is still readable.
- Assembly language translates directly to the bytes that are executed by your computer's processor.
- SP is the stack pointer.
- PC is the program counter--it's how the processor knows at what point in the program it currently is. It's like the current line number of an executing script.
- Instructions in assembly language are like a small set of predefined functions.
- The zero flag is set by all instructions where the result is zero.
- In assembly language, you'll usually use labels with branch instructions. When assembled though, this label is converted to a single-byte relative offset (a number of bytes to go backwards or forwards from the next instruction) so branch instructions can only go forward and back around 256 bytes.
- Remember that a byte is represented by two hex characters.
- The stack in a 6502 processor is just like any other stack--values are pushed onto it and popped off it. The current depth of the stack is measured by the stack pointer, a special register.
- JMP is an unconditional jump.
- JSR and RTS ("jump to subroutine" and "return from subroutine") are a dynamic duo that you'll usually see used together. JSR is used to jump from the current location to another part of the code. RTS returns to the previous position. This is basically like calling a function and returning.
- The processor knows where to return to because JSR pushes the address minus one of the next instruction onto the stack before jumping to the given location. RTS pops this location, adds one to it, and jumps to it.
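A sketch of how an emulator might implement that convention in C, assuming the program counter has already been advanced past the 3-byte JSR instruction; the CPU struct here is simplified for illustration.

```c
/* Simplified emulator state: the 6502 stack lives in page 0x0100 and grows
   downward through the 8-bit stack pointer. */
struct cpu6502 {
  unsigned short pc;            /* program counter */
  unsigned char sp;             /* stack pointer */
  unsigned char mem[65536];
};

static void push(struct cpu6502 *c, unsigned char v) { c->mem[0x0100 + c->sp--] = v; }
static unsigned char pop(struct cpu6502 *c)          { return c->mem[0x0100 + ++c->sp]; }

/* Assumes pc already points at the instruction after the 3-byte JSR, so
   pc - 1 is "the address minus one of the next instruction". */
void do_jsr(struct cpu6502 *c, unsigned short target) {
  unsigned short ret = c->pc - 1;
  push(c, (unsigned char)(ret >> 8));      /* high byte first */
  push(c, (unsigned char)(ret & 0xFF));
  c->pc = target;
}

void do_rts(struct cpu6502 *c) {
  unsigned short lo = pop(c);
  unsigned short hi = pop(c);
  c->pc = (unsigned short)(((hi << 8) | lo) + 1);   /* add the one back */
}
```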
- We can define descriptive constants (or symbols) that represent numbers. The rest of the code can then simply use the constants instead of the literal number, which immediately makes it obvious what we're dealing with.
- Nearly all games have at their heart a game loop. All game loops have the same basic form: accept user input, update the game state, and render the game state.
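The shape of that loop in C, with stub functions standing in for real input handling, state updates, and rendering:

```c
#include <stdio.h>

/* The canonical loop: accept input, update state, render, repeat. The three
   steps are stubs; a real game would do far more in each. */
static int frames_left = 3;        /* stand-in for "the game is still running" */

static void handle_input(void) { /* read keyboard/controller state here */ }
static void update_state(void) { frames_left--; }
static void render(void)       { printf("frame %d\n", frames_left); }

int main(void) {
  while (frames_left > 0) {
    handle_input();
    update_state();
    render();
  }
  return 0;
}
```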