Monday, December 30, 2019

Design and Development

Complexity measures in design and development

Albert Einstein once said, "Everything should be made as simple as possible, but no simpler." These simple words from a genius's mind carry a great deal of meaning and depth. The last two decades saw an exponential rise in different branches of engineering and the sciences, and with these developments came a crowd of very advanced yet very complicated technologies. Are these complexities intended? The answer is very simple: NO. The very word "advanced" almost always implies complexity. No one wants to design and manufacture something very complex, but the more advanced a technology is, the more complex it gets. Even the works of the man who said those words are far too complex for a common man, and that is exactly what he meant: everyone tries to make things as simple as possible, but no simpler than that, they just cannot; even the simplest product design possible can become very complex in some regards.

With increasing complexity, there is always a danger of the system being destabilized, of reduced overall performance, higher cost, higher maintenance, and so on. The way to keep control over complexity is to have a measure of it, so that management and manufacturers know exactly what to do and how to change their operational strategy. In this dissertation I present a detailed overview of complexity, its different meanings and interpretations in various industries, and a host of measures that were developed to evaluate complexity. A few methods of minimizing complexity are also presented, along with case studies illustrating how these measures were applied to real manufacturing and design processes.
Complexity: "What is complexity?" could be one of the most complex questions to be answered. The dictionary definitions of complexity suggest the following:

- Consisting of interconnected or interwoven parts
- Composed of two or more units
- Offering great difficulty in understanding, solving, or explaining
- The interlacing of parts so as to make it nearly impossible to follow or grasp them separately
- Extreme complication and often disorder; complication and entanglement that make solution or understanding improbable

The first two meanings are not closely related to our present context, so I will ignore them, but the rest suggest the exact meaning of what complexity is. As we can see, if I do not understand something properly, or am not capable enough to understand it, that thing is complex to me. Does this mean it is really complex? The answer again is very simple: NO, and that is the very reason why complexity is so hard to define. The complexity of anything depends on many factors, and one very important factor is human understanding. A subject complex to me could be a piece of cake for someone else, and this very behaviour of complexity makes it very hard to measure and evaluate. An important and interesting question that may arise in readers' minds: does the very same meaning of complexity hold in industry too? The answer could be both a YES and a NO. It means the same thing in some cases, but in the rest, the definition of complexity is completely modified. The best example would be industries involved in computer science and engineering. There, the complexity of a piece of code does not really mean it is hard to understand; it rather means that it takes a lot of time for a computer to calculate and give the results. In most mechanical and electronic designs, complexity means systems with multiple interacting parts, the behaviour of which cannot be related to the individual parts.
That is, their collective behaviour is completely different from, and/or unpredictable compared with, their individual behaviour. Again, this unpredictability can relate either to the static structure of these components or to their dynamic nature, hence the distinction between static complexity and dynamic complexity. Now a very good question to answer is: what exactly are these static and dynamic complexities?

Static complexity: Given a particular system (it could be any system, such as a manufacturing plant with different manual and automated equipment, or just a small network with multiple servers/clients), there is always some complexity involved in the static structure of its components; it could be just their physical shapes/sizes, or their alignment with other objects or with the environment. This complexity, which is a function of various parameters like the physical shapes, structures, connectivity, variety, and strengths of components, is called static complexity.

Dynamic complexity: Dynamic complexity relates more to the behaviour of these components as a unit. As mentioned earlier, the pattern of behaviour for a group of components is almost always different from the pattern for the individual components; this behaviour, measured over a period of time, is a major parameter in dynamic complexity.

A very important form of complexity that is normally taken as constant or zero while evaluating static or dynamic complexities is the complexity arising from control: given a particular system, there could be many ways in which it can be controlled, and each of these methods can result in a different static/dynamic complexity measure. Thus, to really evaluate a system, we should also consider this parameter and measure a control complexity too. But for most practical purposes, it is assumed that there is only one way to control the system, thus ignoring control complexity.
Measuring and evaluating dynamic complexity is highly dependent on the industry and its specific design; thus, generic measures for dynamic complexity are not only complicated to form but also inapplicable in the vast majority of other designs. Research is therefore more focused on static complexity and its measures. Though there are papers which concentrate only on dynamic complexity, they are very much oriented towards a specific industry and its related fields. Does this mean that static complexity is similar for all industries and designs? No, but a particular measure calculated for static complexity can easily be extended to other designs, which is not the case for dynamic complexity. In this paper, I will give measurements of both static and dynamic complexities with respect to a manufacturing environment.

Before we go any further into the measurement of complexity, it is a good idea to understand why and how complexity arises in systems. There is a general belief that complexity arises due to many random factors. That could be true in some sense, but it only indicates a very bad design. For systems which are well designed, manufactured, and maintained, randomness is not a major factor; it is rather the fact that the system cannot be easily described which causes more complexity. Axelrod and Cohen describe systems as comprising agents and artifacts. The artifacts are just physical (or virtual) objects that make up the system, whereas the agents, who have attributes like location, memory, the ability to interact with other agents, and the ability to manipulate and change functions, control these objects. The agents do not have to be people alone; they can be computer programs, groups, political entities, etc. that may affect the system directly or indirectly. Another important and very interesting concept of complexity comes from Wolfram, who states that complexity in a system comes from randomness produced by three sources.
The first source is the environment and its intervention, direct or indirect, on the system. The second source is the initial conditions that the system was in before being used. These initial conditions could be random, thus adding more weight to the complexity. The third and most important one is the internal or intrinsic complexity of the system; that is, the complexity of the system when there is no external influence or effect. With all these different views of complexity, we are now ready to go ahead and describe what complexity in design is. But before that, let us see what exactly design means.

Design and why it is done: In this section, let us see what exactly design means and, as in every case, let us start with the exact dictionary definitions of design:

- To conceive or fashion in the mind; invent
- To formulate a plan for; devise
- To plan out in systematic, usually graphic form
- To create or contrive for a particular purpose or effect
- To have as a goal or purpose; intend
- To create or execute in an artistic or highly skilled manner

These, perhaps with little twists here and there, are the exact definitions of design that you see in dictionaries, and they suggest precisely what designing means by industry standards. Basically, designing involves making things better and more useful to customers (or people). Almost every single thing that we use is (or was) designed at some point in time; things that we take for granted were once designed and engineered. Design is an integral part of us and our society and is involved in almost everything and anything we do and use. Designing anything starts with an idea, any idea, good or bad. The main job of designers is to refine this idea so that it is understandable to the people who need to work on it and manufacture it, a sort of blueprint. Whether it is a multibillion-dollar dam or a small fashion hairpin, the process of creating them is much the same and involves almost the same basic general steps.
Before we discuss these steps, it is important for us to understand why designing is done in the first place. Designing is a very important and basic step for any product. To deliver a product, there are many steps involved. There are scientists who invent new technologies, engineers who use these technologies to develop various components, manufacturers who use these components in manufacturing different products, and finally marketers who take the prime role in delivering the product. But who is going to coordinate their efforts to produce a desirable and successful product? No one else but the designers. They are the people who understand what the customer wants and deliver a mechanism to make it happen. Designing is not just what we read in magazines, which depict it as mostly involved with the fashion industry. That is not at all the case; it involves a lot of insight into the way the customer thinks about and wants his product. As I mentioned earlier, everything that we use was designed at some point in time. There are some very important things that every design is supposed to follow, a brief list of which is as follows:

User requirements: The very first and most important aspect a designer has to consider is the user. In a world where a product faces more competitors than ever, very few designs are accepted by the user community, the main reason behind their success being the complete satisfaction of customers. The very first step of any design process is to know what exactly customers want.

Creativity: Next comes creativity. When the designer knows what the customer wants, he has to create something new; he just cannot give the same old stuff which merely satisfies the needs. If I am buying a camera, being a picky customer, I would not buy just any camera that can take a snapshot of me. NO, I want so many other things which maybe I will not even dream of using, but I still want them in my camera.
That is how customers think, and that is exactly what designers provide. Innovation has to be there in a product, without which there is no value to it. Designers explore all the different combinations in which a product can be designed and seek new methods of doing so; the stranger, the better. Now that there are so many simulation tools and other devices that provide insight into the product even before it is made, this work is simplified a lot.

Business process: The other very important consideration a designer has to go through is the business process, both from the company perspective and the user perspective. The overall price of the product may depend on the design, and considering this is very important. The best examples are the products from Microsoft; take, for example, PowerPoint. Though it costs only 50-100 dollars, its overall revenue may be greater than that of some very big software companies whose products cost millions of dollars. Why? The design was done in such a way that there are millions of satisfied customers for PowerPoint or Excel, who can afford it more easily than products that cost millions of dollars.

Manufacturing overview: It is also very important for a designer to be thoroughly aware of what exactly his company is capable of, and at what capacity. If I design a magnificent product in a technology that my company is not even aware of, there is no use for it. A designer should be completely knowledgeable about the manufacturing processes and principles of his company, so that whatever he does runs not counter to the existing mechanism but only increases its productivity by using it in a better way.

Now that we have considered the basic aspects of design, let us look at the design process. Being a designer is not such a simple job; you have to consider many discrete and varied things, a small list of which was provided above. There is a lot of trial and error involved; until you get the right one, there could be a thousand rejected designs.
Though most of the design process is done by designers, there is a lot of contribution from most of the other departments involved in getting the product out: manufacturing people, engineers, business analysts, managers, etc. The following are the basic but general steps, mentioned earlier, that any designer follows:

Understanding and evaluating requirements: The very first step in the design process is to understand and evaluate the user requirements. This involves defining the objectives and setting deadlines, targets, and parameters. The design team is involved right from the beginning to the end, as they have to understand the business process both from the company's point of view and from the customer's point of view, the idea being to create an ideal project which will satisfy both business processes and optimize them. A very important question to answer at this level is: why are we creating or modifying this product? Once this question is properly answered, the rest of the process becomes simpler and more logical.

Research: Research is one of the most important aspects of the whole process. It is an ongoing process, especially in the case of longer projects. Research typically covers a variety of areas: technology, economy, user satisfaction, competitor products, trends, risks, etc. Every one of them will affect the product and its design. A simple example would be the conversion from analog to digital. Maybe a decade or two ago, there were some systems which were still concentrating on analog devices; now they are hardly seen. When such a drastic change is happening (a decade would not seem drastic to us, but for large-scale manufacturing plants, changing their whole technology from analog to digital would cost millions of dollars even when spread over multiple years), it is highly beneficial for a company to be well informed beforehand rather than changing at the spur of the moment.
Research is concentrated more on the customer/user than on anything else. Whatever the user wants has to be done, and it is much more preferable to know the user's choices beforehand by doing our own research, rather than getting a dissatisfied comment from him later.

Planning: Planning is another significant part of the whole process. As I mentioned earlier, it is always better for the designers to know the internal business processes of the company beforehand, rather than learning them later on and trying to modify the design. Planning takes care of this step. With participation from a wide variety of areas all across the board, it becomes simpler for the designer to know and understand the different views and angles on a manufacturing process, so that the overall design is acceptable to and enjoyed by everyone.

Communication: In a business process, there are always instances when the customer thinks of something, the designer understands something else, and the manufacturing people create something completely different. Why does this happen? Lack of communication. Whose mistake is this? Nobody's. It is very important for a designer and his team to keep in constant touch with both the customers and the manufacturing people at the same time. Designers are the only bridge between customers and manufacturers, and they should be completely aware of the business process from both sides. It is the client's responsibility to convey the proper requirements to the designers so that they can re-convey them to their manufacturing people; a small leak here or there can result in disaster. But does the client always do this? NO. So it becomes an additional burden on the designer to keep in constant touch with the customer and keep him posted on what is going on with the product, so that if there is some discrepancy, the message is received instantly; the same goes for the manufacturing people.
Implementation: The last step is implementation, mostly done by the manufacturing people but involving a little contribution from the design team too. First of all, they may have to monitor the whole process and maybe even test it thoroughly. Being the only people with complete knowledge of the client's business requirements, they are also responsible for quality.

This is how a generic design process goes; let me stress the word generic again. Depending on the industry, this process may change here and there, but the changes would be nominal. Now let us consider the different contexts of complexity in different industries and their detailed analysis, the major difference between the following topic and the one presented earlier being that the following is a description of complexity from the design perspective.

Different contexts of complexity in different industries: I already mentioned, while explaining the definition of complexity, that its basic meaning may change from industry to industry. In this section, let me highlight some key industries and illustrate the meaning of complexity with respect to each particular industry. In the same process, let us also try to fold the design process into these contexts, so that we can start concentrating more on complexity in design than on complexity in general. Let us start with the software industry, where the definition of complexity is very fundamental but very useful.

Complexity in design for software industries: What exactly does complexity in software design mean? IEEE Standard 729 gives the following definition of complexity in software: "The degree of complication of a system or system component, determined by such factors as the number and intricacy of interfaces, the number and intricacy of conditional branches, the degree of nesting, the types of data structures, and other system characteristics." Though very extensive, this definition still does not cover all the aspects of complexity in software.
There are many things to be considered when stating complexity in software, a few of which are the operating system, programming language, database, interface being used, and so on. Now a popular question could be: "Does all this matter? Complexity has to be related to the way you design an algorithm more than the way you program it." Actually, it does matter. There is a popular notion of measuring complexity in the software industry whereby a particular language (for example) is compared with another one to decide which is more complex. Though theoretically appealing, practically this is totally wrong. How can one compare an algorithm written in Java to the same algorithm written in C? Their applications and usages are completely different. Similarly, you cannot compare a program using Oracle as its database to a program using Microsoft Access. Can we measure complexity taking all these into consideration? Not really. For measurement purposes, everything falls back to the algorithm level: whatever programming basis you are using, underneath it there is only a single algorithm being used. Thus, in this context, measurement of complexity has to be done with a lot of caution. Later in the dissertation, I am going to describe some popular methods of complexity measurement used in the software industry. In general, complexity in software comprises mainly the following components (apart from the algorithm):

Component reuse (so-called object-oriented programming): This is a very important component of complexity measurement these days. Given a particular algorithm, if you can reuse a piece of code again and again, thus avoiding redundancy, the complexity decreases by a lot. Hence this factor is a very important component of software complexity.

Control flow: This takes into consideration the whole control structure of the program.
Data structures: The number of data structures being used and their size (in bits and bytes).

Size: The overall length of the code, including commented lines and documentation, as even these are considered in the compilation process.

From the above description, we can conclude that software complexity depends a lot on the algorithm being used, but many other factors contribute as well. Thus a good designer would first consider the algorithm, and once the algorithm is decided, he or she would spend more time looking into various other considerations, trying to decrease the length of the code, the number of hits to the database, the number of requests to the server, and so on.

Complexity in manufacturing: Let me clarify what I mean by manufacturing before I go any further: it includes almost every sector of the consumer product industry, from the auto industry to small electronic components. Why am I including all of them in a single concept? Because the way they function is almost identical, differing mainly in scale. In this section, I will distinguish between them whenever necessary, but otherwise they are treated the same. The majority of these industries involve many moving parts, and each one of these parts is itself designed and manufactured, either in the same company or in a different one. Thus there is complexity involved in designing each one of them, and then comes the complexity of assembling them into one single system, normally carried out by various automated and/or manual methods. Consider, for example, the auto industry: with thousands of components going into the assembly line, the whole process becomes highly complex. A similar case is electronic devices, where minute parts have to be placed and soldered on a PCB with the utmost precision.
Normally, the complexity of a manufacturing process depends on many parameters, a brief list of which is as follows:

Similarity in processing requirements: The complexity of a manufacturing process is highly dependent on the processing requirements and their similarity. Any process is much simpler when similar methods are used across its various modules; thus, with variance in processing requirements, the complexity increases. Complexity also increases due to changing consumer demand, which directly affects the whole setup.

Yield: Manufacturing yield is another important factor that determines complexity. There is a constant effort to increase the yield, but without proper planning and automation, this can result in huge complexities.

Miniaturization: With the latest trend of miniaturization, all components are being made as small as possible, thus increasing their overall complexity. We can easily say that a laptop, or for that matter a palmtop, is much more complex than a desktop. A similar trend is being observed in many of the electronics sectors, thus increasing the complexity of design.

Energy efficiency: More applicable to automobiles than anywhere else, this parameter affects complexity greatly. With modern vehicles (hybrid electric and gasoline-based engines), energy efficiency is being increased a lot, but along with it, the complexity is also increasing at a similar rate.

Why do we need complexity measures? So far, I have discussed the basic definitions and detailed meanings of complexity and design. Now let me concentrate on the measurement of complexity. The very first question to be answered in this regard is: why do we need complexity measures? The answer to this question cannot be given in a purely technical fashion; we need some philosophy for it. As can be seen from the trends of the past two decades, the population is rising at a huge rate, and along with it, technology is improving at an exponential rate.
We are living in a period where Moore's law is still being maintained, and the devices that we use daily are being made more and more sophisticated and user friendly. But what if someone wants to understand the concepts behind any of these devices? Though modern communication is fast and very informative, it is vast too. Much of the information provided is random, irrelevant, redundant, and sometimes inaccurate. This provides more confusion than clarity. As Simon says in his paper Creativity, Innovation, and Quality: "Today, complexity is a word that is much in fashion. We have learned very well that many of the systems that we are trying to deal with in our contemporary science and engineering are very complex indeed. They are so complex that it is not obvious that the powerful tricks and procedures that served us for four centuries or more in the development of modern science and engineering will enable us to understand and deal with them. We are learning that we need a science of complex systems, and we are beginning to construct it." It is becoming more and more painful for common people to understand or evaluate systems because of their complexity, and this complexity is increasing day by day rather than taking a downward step. Not only manufacturing processes but also other industries, like software and electronics, and even social, political, religious, medical, and biological systems, are vastly affected. The only way out of this confusion is to do proper design so as to minimize the complexity involved (note the word minimize; it is impossible to eliminate complexity). Are these the only reasons for measuring complexity? No way. No industrialist would ever invest in research on complexity measures for the above-mentioned reasons alone. There is a huge economic advantage in doing proper complexity measurement and then taking proper steps to minimize it.
I will mention a small list of these benefits here and then explain them in detail in subsequent sections. The advantages of measuring, evaluating, and finally minimizing complexity, from a financial point of view, are:

- The operational strategy can be improved a lot.
- Processing, and thus information transfer, is much faster and smoother.
- System performance is better.
- Increased autonomy.
- More customer satisfaction and thus higher profit.
- Easier to maintain, modify, or redesign.

Statistics involved in complexity measurement: Before we can go ahead and derive some formulae for complexity measures, it is a good idea to brush up on some basic concepts of information theory and related statistical subjects. This section is dedicated to a brief overview of some of these important concepts.

Ensemble: An ensemble X is a random variable x with a set of possible outcomes Vx = {v1, v2, ..., vI}, having probabilities {p1, p2, ..., pI} with P(x = vi) = pi, pi >= 0, and sum over i of pi = 1.

Conditional probability: P(x | y) = P(x, y) / P(y), for P(y) > 0.

Product rule: P(x, y) = P(x | y) P(y).

Sum rule: P(x) = sum over y of P(x, y).

Bayes' theorem: P(y | x) = P(x | y) P(y) / P(x).

Stationary process: A random process whose statistical properties do not vary with time is called a stationary random process. That is, for a stationary process, parameters like the mean, variance, and standard deviation are constant across time (example: white noise).

Ergodic process: A random process in which the time series produced are the same in statistical properties; that is, a set of random processes can be considered as time shifts of an original stationary process.

Entropy: A very popular term in information theory, entropy is the lowest average number of bits needed to represent a symbol. Its exact value is H(X) = -sum over i of pi log2(pi). It is also called the uncertainty of x.
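As a small side illustration (my own sketch, not part of the original text), the entropy of an ensemble can be computed directly from its probability vector:

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum_i p_i log2(p_i), in bits.
    Terms with p_i == 0 contribute nothing (0 log 0 is taken as 0)."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit of uncertainty;
# a biased coin carries less.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # about 0.469
```

Note how the uncertainty shrinks as the distribution becomes more predictable, which is exactly why entropy is a natural building block for the complexity measures that follow.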
With this definition of entropy, and following the probability rules defined earlier, joint and conditional entropies can be defined as follows:

Joint entropy: H(X, Y) = -sum over x, y of P(x, y) log2 P(x, y).

Conditional entropy: H(X | Y) = -sum over x, y of P(x, y) log2 P(x | y).

This information should be sufficient for us to go ahead and derive our formulae; if anything else is needed, I will provide it at that point.

Different methods of complexity measurement, their evaluation and analysis: As indicated above, different industries use the term complexity in different senses; thus there are varied meanings and definitions of it. With so many differences involved in just defining complexity, we can imagine how difficult it would be to measure, and to find methods to reduce, complexity for all these manufacturing units. Taking this vastness into consideration, research is normally done only in those fields where some sort of existing mathematical background can be used to form new complexity measures and evaluations. Once these are formed, the same measures can be used for rating the complexity of any related industry. A popular area where a lot of mathematical background exists is algorithmic complexity, mostly in software-related industries but applied in general to a vast array of other industries too. To begin, let me describe a few methods from the software industry, and we shall proceed to manufacturing plants later on.

Fan-in Fan-out complexity: One of the most basic complexity formulae is the Fan-in Fan-out complexity formed by Sallie Henry and Dennis Kafura. Let us define the following parameters:

L = the length of the code in lines
Fan-in = the number of functions that call a particular function
Fan-out = the number of functions that are called by a given function
Then the complexity of the code by this method is given as:

Complexity = L * (Fan-in * Fan-out)^2

In essence, this formula counts the number of data flows out of a particular unit of code, and the number of data flows into that unit or into a data structure, to measure complexity. It is not so useful in real applications with millions of lines of code and very complex algorithms.

Software science: This method was started by Maurice H. Halstead. Again, this is a very simple, and rather limited, way to calculate the complexity of program code. The formula Halstead proposed was as follows:

N = n1 log2(n1) + n2 log2(n2)

where N is the implementation length of the code, n1 is the number of unique distinct operators appearing in the implementation, and n2 is the number of unique distinct operands appearing in the implementation. He then defines the program volume as

V = N log2(n1 + n2)

where log2 is the logarithm to base 2. He suggests that the greater the volume of the program code, the greater its complexity. As I said, the above two measures are of limited use for modern programs involving very complicated algorithms.

McCabe's cyclomatic complexity: To measure the amount of decision logic, that is, loops (for, while, etc.) and branches (if, case, etc.), in a simple software module, we can use McCabe's cyclomatic complexity. For a control-flow graph with a single entry and exit point, it is computed as:

CC(G) = NE - NN + 2

where CC is the cyclomatic complexity, NE is the number of edges in a given control-flow graph G, and NN is the number of nodes in G.

As we have seen much about complexity and its measures in the software industry, let us also try to evaluate some measures in the manufacturing industry. Given the vastness of this field, a lot of research has been done on measuring, evaluating, and minimizing complexity.
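Before moving on, the three software measures above can be made concrete with a minimal sketch of my own; the function names and toy inputs below are hypothetical illustrations, not taken from Henry and Kafura, Halstead, or McCabe themselves:

```python
import math

def henry_kafura(length, fan_in, fan_out):
    """Henry-Kafura structure metric: L * (fan_in * fan_out)^2."""
    return length * (fan_in * fan_out) ** 2

def halstead_volume(n1, n2):
    """Halstead: N = n1*log2(n1) + n2*log2(n2), volume V = N*log2(n1+n2)."""
    n_length = n1 * math.log2(n1) + n2 * math.log2(n2)
    return n_length * math.log2(n1 + n2)

def cyclomatic(num_edges, num_nodes):
    """McCabe: CC = E - N + 2 for a single-entry, single-exit flow graph."""
    return num_edges - num_nodes + 2

# Toy values: a 100-line function called by 3 functions, calling 4 others.
print(henry_kafura(100, 3, 4))        # 14400
# A module with 10 distinct operators and 8 distinct operands.
print(halstead_volume(10, 8))         # about 238.6
# A flow graph with 9 edges and 8 nodes.
print(cyclomatic(9, 8))               # 3
```

All three reduce to simple arithmetic once the counts are in hand; the hard part in practice is extracting those counts (call graphs, operator tables, flow graphs) from real code.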
There are some very basic but highly applicable formulae with which complexity can be measured, and there are derivations that involve a high level of mathematics and statistics. Let me first consider a case in which the measure is really simple and then move on to more advanced ones. The following is a complexity measure that can be used for small systems with very little intermingling. Suppose I have a simple system with the following information about it:

f = the overall number of functions that the product provides
Np = the number of parts used in the product
Nt = the number of types of parts involved in the product
Ni = the total number of interfaces involved in connecting these various parts

Then the complexity C of the system can be found by

C = (1/f) × (Np × Nt × Ni)^(1/3)

As we can see, this formula is quite simple but highly useful for a rough measurement of complexity. Now let us look at some real complexity measures found in the literature that reflect modern manufacturing standards. The following is a complexity measurement scheme proposed by Abhijit V. Deshmukh, Joseph J. Talavage and Moshe M. Barash. Their paper is dedicated to deriving measures for static complexity. Without the whole derivation it would be incomplete to just state the formula, so I will explain a small descriptive part of the process without dealing much with the mathematics. For a detailed derivation, please refer to the paper cited in the references.

Static Complexity and its Measurement:

Static complexity, as defined earlier, concentrates on the static structure of the whole system: the variety of sub-systems, the strengths of interactions, and so on. There is a chain of dependencies in this concept, such as the manufacturing system depending on the part flow, which in turn depends on the types of parts being produced, the types of material-handling devices, etc.
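The simple product-complexity formula above is easy to sketch directly. The example figures below are hypothetical, chosen only to give a round answer.

```python
def product_complexity(f, n_parts, n_part_types, n_interfaces):
    """Rough product complexity C = (1/f) * (Np * Nt * Ni)^(1/3).

    Complexity falls as the product delivers more functions per unit
    of structure, and rises with parts, part types, and interfaces.
    """
    return (n_parts * n_part_types * n_interfaces) ** (1.0 / 3.0) / f

# Hypothetical product: 2 functions, 8 parts, 4 part types, 2 interfaces.
print(round(product_complexity(2, 8, 4, 2), 6))  # 2.0
```

The cube root keeps the measure from being dominated by any single count, so doubling the part count alone raises C by only about 26%.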
Before deriving any complexity measure, a predefined set of constraints is specified so as to minimize erroneous measurement. In this case the authors identified the following characteristics of static complexity.

Factors resulting in static complexity:

1. More than one part type being produced in a single production run.
2. Each part type requiring multiple operations, that is, similar tooling being used to produce products from raw materials.
3. Each operation, for a given part type, having multiple machine or processor options.
4. No precedence constraints defined for a set of operations, that is, operations being performed in a random rather than a predefined order.

Every static complexity measure should be able to capture the effect of the above factors and their combinations. Moreover, any such measure should also satisfy the following conditions:

1. The value of static complexity should increase with the number of parts, part types, machines, and operations required to process one part mix.
2. When the sequence flexibility of the parts in the production batch increases, the static complexity should also increase.
3. When multiple parts share the same resources, the static complexity should increase.
4. The complexity should remain constant when the original part mix is split into groups (whether two or more).

With these factors and constraints defined, a measure of static complexity was derived and evaluated. For a detailed description of the derivation and evaluation, please refer to Complexity in Manufacturing Systems, Part 1: Analysis of Static Complexity by Abhijit V. Deshmukh, Joseph J. Talavage and Moshe M. Barash.
In short, the overall equation of static complexity can be written as (derived from []; a brief overview of this derivation is presented in the next section, Dynamic Complexity)

H = −Σ_{j=1}^{M} Σ_{i=1}^{N} p_ij log2 p_ij

where M is the number of subsystems and N is the number of mutually exclusive states of the system, with p_ij the probability of subsystem j being in state i. As can be seen, the overall equation is merely a joint entropy calculation over these two dimensions. It can also be stated in a much broader form that takes into consideration systems, subsystems, part types, and processing times. With the derived model, the authors rather surprisingly conclude that as static complexity increases, the overall system performance increases.

Derivation and Evaluation of Dynamic Complexity:

In their paper Measuring complexity as an aid to developing operational strategy, the authors G. Frizelle and E. Woodcock define and derive formulae for measuring both dynamic and static complexity. Taking into consideration the complexity measures used in algorithm design, they derived formulae for the manufacturing process. As the whole concept is based on information theory and network theory, it is a good idea for the reader to review those concepts before following the derivation. In the following section I will state the formula and give a small illustration of its application to an industrial process; a brief overview of the derivation is also given but, on the assumption that the reader is familiar with the above-mentioned fields, no effort is made to define some of the properties used, as they are beyond the scope of the present topic. If we consider the manufacturing process as a system with some inputs and an output, and the number of items present at any point in time as the states of the system, then the whole system can be viewed as a queue. If the process is considered stationary for simplicity, then the average rate of inputs must equal that of the output.
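The static complexity equation above can be computed directly from the subsystem state probabilities. This sketch also checks one of the constraints listed earlier: adding a state to a subsystem raises the measure. The function name and example probabilities are mine, not from the cited paper.

```python
import math

def static_complexity(subsystem_state_probs):
    """H = -sum_j sum_i p_ij log2 p_ij, where subsystem_state_probs[j]
    lists the probabilities of the mutually exclusive states of
    subsystem j (each row summing to 1). Zero-probability states
    contribute nothing and are skipped."""
    return -sum(p * math.log2(p)
                for states in subsystem_state_probs
                for p in states if p > 0)

# Two subsystems, each equally likely to be in either of two states:
print(static_complexity([[0.5, 0.5], [0.5, 0.5]]))  # 2.0
# Giving one subsystem a third equally likely state raises the measure,
# consistent with the constraint that complexity grows with variety.
print(static_complexity([[0.5, 0.5], [1/3, 1/3, 1/3]]) > 2.0)  # True
```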
Let us imagine that the maximum capacity of the system is N and the current usage is n (where n ≤ N). Because of the ergodic property (the definition of an ergodic process is repeated here for convenience: a random process in which the time series produced are the same in statistical properties, that is, a set of random processes that can be considered as time shifts of an original stationary process), the probability that the process is completely occupied is n/N. If we cast these relations as a geometric distribution and evaluate the results for maximum entropy, we find that the entropy of the process is maximized when the probability of j items being present at an instant of time is

P(j) = (1 − ρ) ρ^j, where ρ = λ/μ (Equation 1)

Anyone familiar with network theory and probability theory will recognize the above as the state-probability formula for a simple queue with Poisson arrivals at rate λ and an exponentially distributed service rate μ. With this probability, the entropy of the system can be derived to be

H = −log2(1 − ρ) − (ρ/(1 − ρ)) log2 ρ (Equation 2)

(The derivation of the above equation is a simple substitution of the above probabilities into the entropy equation defined earlier.) The logarithms are taken to base two for generic purposes; that is, the states are considered to be either off or on at any point of time and no other transitions are allowed, similar to most information-theory derivations, where the bits are considered to be either 0s or 1s. The above two equations carry a lot of weight and meaning, and I will return to them later; but first let us complete the derivation process without breaking the flow. If we consider the above-mentioned entropy as the upper limit, we can split it into two parts: one for the tolerated states and one for the non-tolerated states.
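Equations 1 and 2 above can be put straight into code, which makes their key behaviour easy to see: the entropy, and hence the complexity, blows up as the utilisation ρ = λ/μ approaches 1. The function names are my own.

```python
import math

def queue_state_prob(rho, j):
    """Equation 1: P(j items present) = (1 - rho) * rho**j for an
    M/M/1-style queue with utilisation rho = lambda/mu (rho < 1)."""
    return (1 - rho) * rho ** j

def queue_entropy(rho):
    """Equation 2: H(rho) = -log2(1-rho) - (rho/(1-rho)) * log2(rho),
    the entropy of the geometric state distribution of Equation 1."""
    return -math.log2(1 - rho) - (rho / (1 - rho)) * math.log2(rho)

print(queue_entropy(0.5))                         # 2.0
# Complexity rises sharply as arrivals approach the service rate:
print(queue_entropy(0.9) > queue_entropy(0.5))    # True
```

At ρ = 0.5 the entropy is exactly 2 bits; at ρ = 0.9 it is already above 4.5 bits, illustrating why busy plants feel so much more complex than lightly loaded ones.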
If we also split the programmable parts from the non-programmable parts (considered to be Bernoulli-type processes), then we can write

H = H_T + H_N

where H is the overall entropy of the system, H_T is the entropy of the tolerated states, and H_N is the entropy of the non-tolerated states. If we take P as the probability that the system is under control, these terms can be expanded using p_q, the probabilities of queues of varying length, p_m, the probabilities of having a queue of one or zero, and p_b, the probabilities of the Bernoulli states, giving a final equation for the entropy of the form

H = −P log2 P − (1 − P) [ log2(1 − P) + Σ p_q log2 p_q + Σ p_m log2 p_m + Σ p_b log2 p_b ] (Equation 3)

Refer to paper [] for a more detailed overview of the derivation process. The above formula represents the dynamic complexity of the whole system. The aforementioned formula for static complexity can now be recovered from Equation 3 by substituting P = 0, p_b = 0, and p_q = 0 and taking the limit of time to infinity, which gives the equation for static complexity.

Effects of Complexity Measures on Production Activities:

Now that we have explained some measures of complexity, let us also describe their effects on the production environment and on operational strategy. Let us start with Equations 1 and 2 and try to find out what exactly they signify. The equations are repeated here for the reader's convenience:

P(j) = (1 − ρ) ρ^j (Equation 1)

H = −log2(1 − ρ) − (ρ/(1 − ρ)) log2 ρ (Equation 2)

Though not exactly complexity measures in themselves, these two equations tell us a great deal. The very fact that Equation 1 is an exact representation of a Poisson-based queue, used extensively in network theory to represent Markov processes, means a lot: the whole production system can now be seen as a Poisson queue, or as a Markov process with finite arrival and service rates. Considering the upper limit of complexity defined in Equation 2, we find that as λ approaches μ — that is, as the arrival rate comes closer to the service rate — the complexity of the overall process increases.
This is a very good explanation of the higher complexities seen in busier environments. It thus forms a very good basis for knowing what service rate is optimal for controlling complexity, and hence for finding a good operational strategy. If just considering the upper limit of entropy gave us so much information about the complexity of the overall system, we can imagine how much more can be obtained by going into the intricacies of the above equations. A simple example would be to consider the static and dynamic complexities, individually or in combination, when evaluating an operational strategy. If we consider the tolerated and non-tolerated states, it can be seen that by increasing the control over the system (that is, by increasing the value of P, which represents the probability that the system is under control), the value of the entropy, and thus of the complexity, decreases. But this does not affect the non-programmable states of the system; that is, instead of removing the complexity, we are just moving it to a state where we are in control of the system. Indirectly speaking, it is something like having a complex system but knowing exactly what it is going to do at each point in time. To minimize the complexity, we can take a different approach in which we work directly on the non-tolerated states too. Both programmable and non-programmable states can be controlled using proper tools so that the overall system complexity is reduced. The above discussion concerned only the field of manufacturing; if we consider the software industry too, complexity is directly related to the performance, stability, reliability, maintainability, and other such qualities of the system. The higher the complexity, the greater the chance of finding bugs in both performance and functionality. The one big problem with the software industry is that there is no proper mechanism for finding the complexity of a piece of code.
The methods I mentioned could be used, but they are not really thorough. If I were a software manager, I would not put much weight on the complexity figures given by some of these tools; no one would. Now that we have seen some major ways in which complexity can be used in deciding operational strategy, let us also consider methods by which complexity can be reduced in the manufacturing process so as to obtain better results.

Methods of Controlling Complexity:

In the above discussion I mentioned some means by which complexity can be controlled or minimized. Let me state some more in greater detail. In the sections covering the introduction and definitions of complexity, I repeatedly mentioned the causes of complexity and the means by which complexity sneaks into the system. Even though the technology used in the manufacturing process is the biggest cause, there are many small things which, if controlled in a proper fashion, can eliminate a large part of the complexity. Much research has been done in this regard to find means by which project complexity can be minimized, leading to faster performance and greater quality. The following are some key points taken from these research papers:

Formalization: Though not directly related to the complexity of a project, formalizing a project always helps in producing a better product. The more formalized a project is, the lower the probability of small mistakes.

Improved Communication: Communication between team members from the beginning to the end of the product cycle is one of the most important means of minimizing design and manufacturing complexity.
By improving communication and interaction, there is not only a chance of improving the design in the first place (as more heads are involved), but misconceptions and misrepresentations of the design process are also eliminated, minimizing or completely removing the complexity that arises from gaps in human understanding.

Leadership Style: There is always a requirement for a strong leader to lead the group and communicate its developments to both higher management and the customers. Strong leadership can change the course of product design and delivery. In their paper Project Complexity and Efforts to Reduce Product Development Cycle Time, the authors Thomas B. Clift and Mark B. Vandenbosch suggest two propositions about leadership: (i) the shorter the cycle time, the greater the requirement for an authoritarian leadership style; and (ii) the more complex the project, the greater the requirement for a participative leadership style.

Customer Involvement: One of the main causes of increased complexity is irregular communication with the customer. After all, customers are the final users of the product, and it is their responsibility to state what exactly they expect from it. If it is a general product applicable to a wide section of the community, then it is the responsibility of the design team to research the customer requirements and convey them to the manufacturing team.

The above are some non-technical, management-oriented means of reducing complexity. Are there any technical, design-oriented methods? There definitely are, some of which were discussed in the earlier sections. The process is simple: take a complexity measure applicable to your industry and try to minimize it using various means. A more research-oriented approach is given in some of the papers listed in the references section.
After considering the means of minimizing complexity, it is a good idea to look at a couple of case studies in which complexity measures were used to change the design process. As explained earlier, I will concentrate on manufacturing rather than software or other industries, as that is where most of the research on complexity and its applications is found.

Case Studies:

Almost every paper on complexity measures lists several case studies in which the complexity of a system was reduced by applying the proposed measure and changing the operational strategy accordingly. In the following I will describe case studies from varied areas so that a broader outlook is obtained.

Case Study 1: Let me first consider a very interesting case study described in Measuring complexity as an aid to developing operational strategy by G. Frizelle and E. Woodcock; a brief description of their measurement process was given in the Dynamic Complexity section. They applied their measure to a machine shop consisting of 35 processes, 59 machines, and an overall part count of 350 (a small-to-medium-scale operation, which is the usual case). In their measurements they found that the overall static complexity was around 96.4 epp, where epp (equivalent product process) is the unit they use for measuring complexity. The dynamic complexity was around 160 epp, with the programmable part contributing 78.4 epp. The interesting point is that the dynamic complexity is very high compared with the static complexity, which suggests a major flaw in the design and implementation process. When they evaluated the complexity contributions from various sources, they found that programmable states such as the volatile mix, batching, etc. contributed greatly to this spike in complexity, thus creating a bottleneck. There were also issues found with queue stability.
By fixing these, the dynamic complexity was reduced a great deal and the process was much improved. There are two more very interesting trials mentioned in the same paper, which the reader may find useful to go through.

Case Study 2: Another very thought-provoking study of design is given in Harnessing Complexity in Design by Timothy T. Maxwell and M. M. Tanik. Though they do not deal with complexity measures as such, they use many means by which the design process can be modified so as to produce a better design with less complexity. As I mentioned some of these measures in the earlier discussion, I thought it would be interesting to illustrate their application in practice, hence this case study. In the paper they applied their principles to the design of a fuel-cell-powered sport utility vehicle (SUV), a project done at Texas Tech University (TTU). Most of the case study gives a detailed description of the design process: team effort, organizing and formalizing the team, constant communication, and understanding of the basic objectives, requirements, constraints, etc. After these basics were covered and put in place, a detailed design perspective was presented, with the various systems and sub-systems, their hierarchical nature, and their interactions. Such interactions and hierarchies, if properly designed and developed, greatly reduce complexity; that is exactly what was done, thus harnessing the complexity. Though very simple and fundamental, this discussion provides deep insight into how complexity can be harnessed without the involvement of high-level mathematics, and I strongly suggest the reader go through the whole discussion.

Conclusion:

In the above discussion I presented the exact definitions of complexity and design, along with a detailed description of their meanings as applied to various industries.
Along with that, a host of complexity measures and evaluations were presented, applicable to a wide variety of industrial applications. Using these schemes to measure complexity, and thereby obtaining a means of modifying operational strategy, is a very useful but tricky process. The user should go through multiple ways of measuring complexity and carefully evaluate the one that best fits his or her industry. Along with measuring complexity, it is also very useful to make some organizational changes, which not only improve the design and development process but also boost the morale of manufacturers and customers, leading toward a more successful and profitable product.

Future Research:

As can be seen from the above discussion, complexity measures are a very important tool in deciding operational strategy and evaluating the fitness of a design and development process. But there are very few fields in which a perfectly working complexity measure exists. It would be highly useful for industrialists to sponsor a wide variety of research activities across the various fields of manufacturing and design to find the underlying complexities and measure them. In this section I will list a few of the fields in which complexity measures could be researched and formed.

Generalization: The research available so far is specific to particular industrial settings. It would be very useful to have measures that can be freely applied to a wide variety of fields; generalizing the existing measures to multiple areas would be a great thing to accomplish.

Complexity involved in the Supply Chain: The measures so far are mostly applied to the various parts involved in the design and manufacturing mechanism, but no one considers the parts obtained from third-party vendors as a different set of entities. A simple example is the auto industry, where one of the most important parts of the whole design process is supply chain management.
I believe there should be complexity measures for these processes too, and that they should be integrated into the complexity measure of the overall mechanism.

Complexity for Software: There is a great deal of research going on in the field of software engineering to define and evaluate the complexity of programs. I think generalization is strongly needed in this field as well. A particular tool used for measuring the complexity of code written in one language is not applicable to another language, or gives an erroneous complexity result. This should be eliminated.

References:

[1] Axelrod, Robert and Michael Cohen, 1999, Harnessing Complexity, The Free Press, New York.
[2] Simon, H. A., 1999, The Sciences of the Artificial, Third Edition, The MIT Press, Cambridge, Massachusetts.
[3] Plsek, P. E., 1997, Creativity, Innovation, and Quality, ASQ Quality Press, Milwaukee, Wisconsin.
[4] Warfield, J. N., 1994, A Science of Generic Design: Managing Complexity Through Systems Design, Iowa State University Press, Ames, Iowa.
[5] Timothy T. Maxwell, M. M. Tanik, Harnessing Complexity in Design, Society for Design and Process Science, 2002.
[6] Jeff Tian, Marvin V. Zelkowitz, Complexity Measure Evaluation and Selection, IEEE Transactions on Software Engineering, 1995.
[7] Michael Goldwasser, Jean-Claude Latombe, Rajeev Motwani, Complexity Measures for Assembly Sequences, IEEE International Conference on Robotics and Automation, 1996.
[8] Thomas B. Clift, Mark B. Vandenbosch, Project Complexity and Efforts to Reduce Product Development Cycle Time.
[9] Morgan Swink, Dingdong Zing, NPD Complexity and Technology Novelty as Antecedents of Design-Manufacturing Integration: Effects on Product Design Quality.
[10] Mike Hobday, Product Complexity, Innovation and Industrial Organisation.
[11] Abhijit V. Deshmukh, Joseph J. Talavage, Moshe M. Barash, Complexity in Manufacturing Systems, Part 1: Analysis of Static Complexity.
[12] G. Frizelle, E. Woodcock, Measuring Complexity as an Aid to Developing Operational Strategy.

Sunday, December 22, 2019

Jonathon Swift’s Gulliver’s Travels Essay - 2951 Words

Humankind as the Balance of Rationality and Passion â€Å"A Voyage to the Country of the Houyhnhnms† Jonathon Swift’s Gulliver’s Travels takes place in four parts, each of which describe Gulliver’s adventures with fantastical species of foreign nations. The search for Swift’s meaning has been a controversial one; the novel has been interpreted along a wide spectrum ranging from children’s story to a satire of human nature. The greatest debate lies within the realm of satire, and Part Four of Gulliver’s Travels, â€Å"A Voyage to the Country of the Houyhnhnms,† is just one area in which critics argue for a variety of satirical meanings. Critics traditionally argue for the â€Å"hard† interpretation which posits the strictly rational nature of the†¦show more content†¦(501) There are many instances in which the Houyhnhnms are depicted in a positive light, and one such case is Gulliver’s revelation that â€Å"The Houyhnhnms have no world in their language to express anything that is evil, except what they borrow from the deformities or ill qualities of the Yahoos† (Swift 2413). In addition, Gulliver reflects upon periods where he was in the company of conversing Houyhnhnms as he expresses: Nothing passed but what was useful, expressed in the fewest and most significant words; where (as I have already said) the greatest decency was observed†¦where no person spoke without being pleased himself, and pleasing his companions; where there was no interruption, tediousness, heat, or difference of sentiments. (Swift 2414) A sense of emotional resiliency is also expressed as Gulliver describes the circumstances surrounding the death of a fellow Houyhnhnm. He writes, â€Å"They†¦are buried in the obscurest places that can be found, their friends and relations expression neither joy nor grief at their departure; nor does the dying person discover the least regret that he is leaving the world† (Swift 2413). 
While these are all characteristics that can be seen in a virtuousShow MoreRelatedThe Downfall And Vice As A Tale Of A Tub And The Battle Of The World Essay2058 Words   |  9 PagesThe prominence of Jonathon Swift and his work is undeniable in the Eighteenth Century. Swift’s emergence into the literary world was spurned on by writing about politics and religion with his strong opinions and wit. Other famous works by Swift include A Tale of a Tub and The Battle of the Books based on the corruptions in religion and learning at the time. Swift’s works in literature were often written to further a cause or reaction. The idea of the antagonising satirist is reiterated in a conversationRead More Comparing Platos Republic and Gullivers Travels Essay838 Words   |  4 PagesPlatos Republic and Gullivers Travels      Ã‚  Ã‚   In The Republic, Plato attempts to define the ideal state as it relates to the tripartite division of the soul. In this division, wisdom, the rational characteristic of the soul, is the most valuable and important. In the ideal state the ruling class would be the guardians, those who maintain rationality and will operate according to wisdom. Each individual should be put to use for which nature intended them, one to one work, and then every manRead More Utopia in Gulliver Travels and Paradise Lost Essay2460 Words   |  10 PagesThe Inconceivable Utopia in Gulliver Travels and Paradise Lost  Ã‚      In Jonathon Swifts Gulliver Travels and in John Miltons Paradise Lost, the reader is presented with two lands representing utopias. For Swift this land is an island inhabited by horse like creatures called Houyhnhnms who rule over man like beasts called Yahoos. For Milton, the Garden of Eden before the Fall of man represents Paradise. In it, Adam and Eve are pure and innocent, untested and faithful to God. 
The American HeritageRead MoreThe Shock Factor of A Modest Proposal by by Johnathan Swift 789 Words   |  3 Pagesseriously and the blatant sometimes over-the-top sarcasm occasionally used, several parts of it would cause an uproar and quite possibly a revolution if implementation were ever attempted, and there was even a hint that enforcement of it was to occur. Jonathon Swift was born on the 30th of November 1667 in Dublin, Ireland and died on the 19th of October 1745 in the same (Johnathon Swift). He father died before he was born and his mother had a hard time supporting him on her own. She ended up giving himRead MoreWhat Divided Whigs and Tories in the Reigns of William Iii and Queen Anne (1688-1714)?2936 Words   |  12 Pagessacrifice that simply had to be made to ensure future stability through Protestant dominance not just in Britain, but throughout mainland Europe[20]. Tory attitude to William’s wars are perhaps best encapsulated in Tory writer Jonathon Swift’s satirical classic â€Å"Gulliver’s Travels†: â€Å" He wondered to hear me talk of such chargeable and extensive wars; that, certainly we must be a quarrelsome people, or live among very bad neighbours†¦He asked what business we had out of our own islands, unless upon the

Saturday, December 14, 2019

Tyjrtjr Free Essays

string(69) " so large that it’s unlikely to have occurred by chance alone\." Laboratory Class Eight: Brain and Behavior 2: Basic Unromantic and Function. Laboratory Class Nine: Revision Laboratory. References Inspirational Readings. We will write a custom essay sample on Tyjrtjr or any similar topic only for you Order Now All research or teaching using people at the University of Auckland requires approval of the University of Auckland Human Participants Ethics Committee. We have chosen the exercises carefully in order to provide you with what we hope will be an informative learning experience. However, if you are uncomfortable with any exercises we strongly encourage you to contact your tutor and ask to be excused from participation. It is much better if you are able to do this before the lab is underway. It is therefore recommended you read the manual to find out what is coming up before each lab and decide if you think any of the exercises may be distressing to you. If an exercise becomes distressing or uncomfortable for you during the lab, you are still able to be excused. Please be aware that you will only be excused from the specific exercise of concern, not the entire lab. Please also be aware that you will not be able to be excused from parts of a lab AFTER it has taken place on these grounds; you must see your tutor before or during the lab. For Ethical concerns contact: The Chair, The University of Auckland Human Participants Ethics Committee, The University of Auckland, Private Bag 92019, Auckland. Tell: 373 7699 ext. 87830. Completing Laboratory Reports Introduction The laboratory reports for PSYCH 109 can count towards 20% of your final mark. Therefore, students are strongly advised to put significant effort into gaining good marks for their reports. When preparing reports, there are a number of things students should know. This section of the laboratory manual is written so that the appropriate information is available to all students. 
The various areas of psychology taught in PSYCH 109 have a long history of research. An essential component of scientific communication is the requirement of conciseness and parsimony. This means that when communicating experimental outcomes and conclusions (such as from an international research project or an introductory level laboratory in psychology) it is very important to write in precise was observed, should be given. However, oversimplification is not an acceptable course of action. Explanations need to account for what was observed: no more, no less. General considerations for Laboratory Reports ; Never exceed the page limit that is prescribed for an assignment, You will be able o answer questions adequately within the space limit. ; Ensure that you use appropriate grammar correct and spelling. Try to write clearly. Never assume that the marker knows what you mean. Remember that a marker can only evaluate what you have actually written – not what you meant to say in your answer. Plan how you are going to write your answers. Do not simply write the first thing that comes into your head. Write a draft answer that you can edit and revise before writing your final answer. Try and use short sentences. Two short sentences are usually better than a long one. Ideas can be stated more concisely in shorter sentences. Often, long sentences end up being ambiguous. ; Remember to proof-read your work carefully before submitting your report. Sometimes it is a good idea to ask a friend who is not enrolled in 109 to proof-read your work and check for clarity. If this person does not understand your answer, it is likely that the marker will also struggle to follow it. If it is discovered that two or more Laboratory Reports are exactly the same, the concerned parties will be subject to disciplinary action. Plagiarism of any kind is not permitted. 
General requirements To help you write laboratory reports that will reward your effort with good marks, he following list of important points has been prepared. If you want to attain high marks you will need to incorporate the elements in th is list into your written work for these papers. Constructing graphs experience of drawing graphs before and a few of you will have your own ideas of how a graph should be drawn. These ideas may come from what you were taught at school or from the way you were instructed to draw graphs in other departments. Different scientific disciplines have their own codes of practice and communication. This is because the most concise mode of communication for one rear of science may not (and usually is not) the most concise mode for another area. Psychology is a science that follows the codes of practice and communication set down by the American Psychological Association (PAP), and the PAP has produced a set of guidelines for the presentation of graphs from psychological research. According to PAP guidelines, there are strict rules for drawing graphs. In this Laboratory manual, however, when graphs are required, the emphasis will be more on how to interpret the graphs produced during the experiments. However, graphs must be legible and neat, and must follow the general guidelines below. General considerations for graph drawing Graphs should always be drawn within the space provided in the manual. It is a good idea to draw a preparatory graph on separate paper (graph paper will help you here) so that you can make a neat, correct copy in the space provided. Graphs should be made as large as possible without causing cramping or squashing. All graphs should be drawn in pen (never pencil) and only one color is permitted -? preferably blue. All straight lines from which a graph is constructed must be drawn using a ruler. All errors need to be corrected either by redrawing the graph or, for a very small error, by neatly whiting out the error. 
Statistical Analysis in the Social Sciences

Significant differences

In psychology, we are often faced with the question of whether or not the difference we see in two groups of data is statistically significant. A significant difference observed in the data is one that is so large that it is unlikely to have occurred by chance alone. For example, we may be interested in knowing whether students perform better in an examination under one condition than another, say, sitting an examination in a well-lit room as opposed to a dimly-lit room. We could randomly allocate students to one of the two rooms, have them sit the examination in their allocated room, and then compare the two groups' examination results. There will always be a difference between the groups' average results, and there are two possible explanations for this difference:

1. Non-significant difference. The observed difference could solely be due to which students happened to be allocated to which room, i.e., it could be just due to chance alone and nothing else. OR
2. Significant difference. The observed difference is sufficiently large that we simply do not believe it is likely to have occurred by chance alone, but rather that the level of lighting in the room is also having an effect on each group's results, i.e., this difference is so large that it is unlikely to occur when nothing else (apart from the 'chance' effect) is going on.

Significance tests and the p-value

Sometimes the difference between two groups of data is really so large that, maybe with the aid of a plot, we can easily conclude that it is a significant difference. On most occasions, though, it is not so clear cut, and in order to decide objectively whether a difference is significant or non-significant we must perform a significance test. When we conduct a significance test, the most important value produced in the output is the p-value. The p-value is a probability, a value between 0 and 1, and it answers a question about the data, e.g.:
"How likely is it, i.e., what are the chances, i.e., what is the probability, that a difference this big, or bigger, would have been observed in the data if there really were nothing going on?"

Interpreting the p-value

Small p-values → a significant test result. Large p-values → a non-significant test result.

If the p-value is small (less than 0.05) then it is saying that less than 5% (0.05) of the time (hardly ever) would we observe a difference as big as this (or bigger) when nothing apart from chance is contributing to it; it would be highly unlikely to get a difference this big by chance alone. We say the observed difference is 'significant at the 5% level'. There are a large number of significance (hypothesis) tests available, the choice depending on the situation under study, but in this course we will look at only one test, the independent samples t-test.

(Non-assessed laboratory class.)

Learning objectives

After completing this laboratory students should:

1. Understand the assessment requirements, requirements for plussage, attendance requirements, and assignment requirements for Psych 109.
2. Understand the hand-in dates for the two laboratory reports for Psych 109.
3. Understand the penalties for handing in late work, and the cut-off dates for accepting late assignments for Psych 109.
4. Understand where to hand in late laboratory reports for Psych 109.
5. Know the date and time of the terms test for Psych 109.
6. Understand what plagiarism is, and understand the consequences of plagiarism or other forms of cheating.
7. Understand the correct procedure to follow for raising individual concerns or course criticisms regarding Psych 109.
8. Understand that a Psych 109 student must attend their scheduled laboratory stream in the weeks that laboratories are scheduled, and that they must ensure that their tutor correctly records their attendance at laboratories.
9. Understand the procedure to follow if the scheduled Psych 109 lab cannot be attended.
10.
Understand the GPA requirements for undergraduate Psychology courses.

Thinking (assessed laboratory classes). Lecturers: Associate Professor Tony Lambert (author of lab class); Associate Professor Doug Life (author of Research Methods lectures).

After completing this laboratory students should:

1. Understand the distinction between an independent groups research design and a repeated measures research design.
2. Be able to use a histogram in order to explore and evaluate the variability in a set (or sets) of scores.
3. Be able to calculate the standard deviation of a set of scores using SPSS.
4. Be able to perform a t test in order to compare two experimental conditions.
5. Understand the statistical nature of inferences based on the outcome of a t test.
6. Gain an appreciation of the complex issues that may be encountered in considering possible relationships between experimental evidence and theoretical conclusions.
7. Be able to think critically about the relationship between experimental evidence, psychological theory and everyday behavior.

Do men and women think differently? If so, to what extent and in what ways does the thinking of women differ from that of men? Judging from the enormous popularity of publications such as Men are from Mars, Women are from Venus, it seems that almost everyone has at least some interest in this question. In addition to popular publications of the Venus and Mars ilk, a substantial amount of serious science has been directed at answering this question. It will come as no surprise to discover that this work is controversial. Controversy over research into sex differences in thinking is apparent at several levels. There has been disagreement concerning the reliability of the findings: sex differences have been reported in a number of published studies, but not all of these findings have been replicated successfully by other researchers. Therefore, questions remain concerning the reliability of results in this area.
In addition to the question of empirical reliability, there is the rather thornier question of what the experimental findings actually mean. For example, there is of course the perennial nature-nurture issue. So if we find, for example, that men and women differ in their verbal and spatial skills, is this due to environmental factors arising from different childhood experiences and child-rearing practices for boys and girls; or is it due to innate factors, related to biological and relatively immutable differences in brain structure and function for men and women? In addition to this rather baldly stated dichotomy between nature and nurture, a third state of affairs is possible: that both nature and nurture contribute, and that biological factors interact with learning and experience in complex ways during childhood. One might also wish to consider the size of an experimental effect; although men and women may differ as a group on a particular cognitive task, there will also be considerable overlap in the scores. Clearly, the degree of overlap between the cognitive performance of men and women will have a bearing on the conclusions that can be drawn.

The research findings of Halari et al. (2005)

Halari et al. (2005; Behavioral Neuroscience, 119, 104-117) asked 42 men and 42 women to perform a variety of verbal and spatial tasks. Blood samples were also taken, so that levels of circulating hormones, especially estrogen and testosterone, could be measured. This was done because one aim of their study was to discover whether there is any relation between hormone levels and performance on cognitive tasks. There were three main findings: (1) females performed better than males on a verbal fluency task; (2) males performed better than females on a spatial task involving mental rotation; (3) there were no clear relationships between hormone levels and performance on any of the cognitive tasks.
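The degree of group overlap just mentioned is commonly quantified with a standardised effect size such as Cohen's d: the larger the d, the less the two score distributions overlap. A minimal sketch in Python, using entirely made-up scores for illustration (this is not data from the study discussed here):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference, using the pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * statistics.variance(group_a)
                  + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_var ** 0.5

# Made-up mental rotation scores (percent correct) for two small groups
men = [82, 75, 90, 68, 77, 85, 71, 80]
women = [74, 70, 82, 65, 72, 78, 69, 73]

d = cohens_d(men, women)
# By convention, d around 0.2 is 'small', 0.5 'medium', and 0.8 'large'
print(f"Cohen's d = {d:.2f}")
```

Note that even a conventionally 'large' d of around 0.8 still implies substantial overlap between the two distributions, which is why the size of an effect, and not just its statistical significance, matters when drawing conclusions.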
In the laboratory exercise we will attempt to replicate the first two findings of Halari et al. (2005). (Obviously, it is impractical to look at their hormonal findings in PSYCH 109, and even if we could, attempting to replicate their 'null result' may not tell us very much.) Our study, and that of Halari et al. (2005), make use of an independent groups research design (also known as a between subjects research design). As you will remember from the recent Research Methods lectures, an independent groups (between subjects) design involves comparing different groups of individuals. In this case, our independent variable (IV) is sex, because the experiment involves comparing men and women with respect to scores on verbal and spatial tasks. Other examples of independent groups designs might involve comparing extraverts with introverts (IV is personality), or five year olds with seven year olds (IV is age), or left-handers with right-handers (IV is handedness), or anxious with non-anxious individuals (IV is anxiety), and so on. An alternative, and equally popular, approach is to use a repeated measures research design (also known as a within subjects research design). In a repeated measures (within subjects) experiment the same individuals are tested repeatedly in two or more experimental conditions. An example of this kind of design could involve comparing the driving behavior (using a simulator!) of individuals before and after consuming varying amounts of alcohol (IV is alcohol dosage). Another example could involve asking individuals to employ different memory strategies and then comparing their performance under these different instructional conditions (IV is memory strategy). Each kind of design (i.e., repeated measures and independent groups) has advantages and disadvantages which render it useful for research in different kinds of situation. One advantage of the repeated measures design is that it is often more sensitive than an independent groups design.
This is because each person is being compared with themselves under different experimental conditions. A disadvantage of repeated measures designs is that the results can be contaminated by practice and/or fatigue effects. A common strategy for eliminating or minimizing this problem is to counterbalance the order of performing in the different experimental conditions. For example, in the driving and alcohol example just mentioned, half the participants might perform the driving task in the alcohol condition first, followed (several days later!) by the no-alcohol condition; the other half would participate in the two experimental conditions in the reverse order. Independent groups is of course the appropriate design in any situation where the research question is related to individual differences, such as personality or handedness. Independent groups designs are also often used in the clinical trials of medical researchers, where the effectiveness of one treatment is compared with that of another. Hence, our experiment will employ an independent groups research design with sex (female vs. male) as the independent variable. The experiment will have two dependent variables: scores on a verbal fluency task and scores on a mental rotation task. As you will remember from Research Methods lectures, dependent variables are the quantities or factors that are being assessed to see whether they might be related to (i.e., dependent upon) changes in the independent variable.

How to carry out the experiment

As mentioned earlier, our aim is to try and replicate the findings of Halari and her colleagues published in the journal Behavioral Neuroscience (Halari et al., 2005). To do this, each student participant will need to carry out a mental rotation task and a verbal fluency task. All participants will perform the mental rotation task first, followed by the verbal fluency task.

Figure 1.
In the mental rotation task (see text) participants must decide whether pairs of shapes, such as those shown in A, B and C, are identical or different.

Mental rotation task

Look at the top pair of pictures (A) shown in Figure 1. Are the shapes shown in the pictures exactly the same, or are they different? How did you arrive at your answer? Most people report that they solve this problem by imagining rotating the left-hand shape clockwise (or the right-hand shape anti-clockwise); you may be able to 'see', in your mind's eye, that the two shapes are exactly the same. Now decide whether the pairs shown in (B) and (C) are also the same. By using the same strategy, you might be able to 'see' that the shapes in B are also identical, but the shapes in C are different, and remain different whichever way you rotate them in your imagination. The drawings shown in Figure 1 are similar to those used by Roger Shepard and Jacqueline Metzler in a classic study published in the journal Science in 1971. Shepard and Metzler found that the time taken to make a decision in this mental rotation task increases systematically as the angular disparity between the two drawn objects increases. These findings attracted great interest at the time, and continue to attract interest nearly four decades later. One reason for this enduring fascination is that Shepard and Metzler's findings showed that a mental phenomenon such as imagination, which appears at first glance to be irredeemably private, subjective, and unobservable (by anyone else, aside from the person doing the imagining), can nevertheless be studied scientifically. Furthermore, their findings showed that one aspect of imagination, the mental rotation process, appears to operate in a highly systematic and lawful way.
In the version of the mental rotation task to be used for this laboratory exercise, you will be presented with pairs of line drawings representing 3D shapes, and will be asked to decide whether the two shapes are the same or not. As in the examples shown in Figure 1, the shapes will be presented at varying orientations. On trials where the correct response is 'different', the two shapes are usually mirror images of each other. These features of the task make it relatively difficult! Do not be concerned if you make errors when you carry out this task. The dependent variable for this part of our experiment is percent correct; clearly the experiment would fail if everyone was able to perform the task with 100% accuracy!
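The scoring and analysis described here can be sketched in a few lines of Python. All scores below are made-up illustrative data, and since no statistics package is assumed in this sketch, the t statistic is compared against the tabulated two-tailed 5% critical value rather than converted to an exact p-value (the course itself uses SPSS for this step):

```python
import statistics

def percent_correct(responses, answers):
    """Percentage of trials on which the response matched the correct answer."""
    hits = sum(r == a for r, a in zip(responses, answers))
    return 100.0 * hits / len(answers)

def independent_t(group_a, group_b):
    """Independent samples t statistic, using the pooled variance."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * statistics.variance(group_a)
                  + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
    se = (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / se

# One hypothetical participant's run of eight 'same'/'different' trials
answers = ["same", "different", "same", "same", "different", "different", "same", "different"]
responses = ["same", "different", "different", "same", "different", "same", "same", "different"]
print(percent_correct(responses, answers))  # 6 of 8 trials correct -> 75.0

# Made-up percent-correct scores for two independent groups of ten
men = [82, 75, 90, 68, 77, 85, 71, 80, 79, 73]
women = [64, 70, 62, 65, 72, 58, 69, 63, 66, 61]

t = independent_t(men, women)
# With 10 + 10 - 2 = 18 degrees of freedom, the two-tailed 5% critical value is about 2.101
print("significant at the 5% level" if abs(t) > 2.101 else "not significant")
```

The function names, trial data and group scores are invented for illustration; the point is only the logic of the comparison, i.e., that the observed difference in group means is judged against how large a difference chance alone would plausibly produce.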

Friday, December 6, 2019

Effective Crisis Communication: Crisis to Opportunity

Question: Discuss effective crisis communication and moving from crisis to opportunity.

Answer:

Introduction: Effective leadership skills help in developing and structuring organisational behaviour and values. Leaders bear the main responsibility for structuring the organisational culture and accomplishing business goals (Hackman and Johnson 2013). The following communicational factors need to be taken into consideration.

Each member of a group needs to participate equally, so efficient leaders need to establish a proper communication process and discuss the different priorities (Men and Stacks 2014). Efficient leaders also need to communicate about emerging issues or conflicts that may affect internal functioning. It is a major responsibility of leaders to communicate with group members and make them aware of organisational values and morale. Leaders establish communicational flexibility among people from diverse backgrounds, and the communication process is strengthened through frequent group meetings and discussions. Leaders need to discuss the innovative processes that help in developing the professional and personal attributes of employees (Ulmer, Sellnow and Seeger 2013). Moreover, leaders develop effective interpersonal skills by establishing a proper communication process. Leaders need to communicate about business ethics: effective group discussion helps group members become knowledgeable about the business ethics that need to be maintained. Leaders also take responsibility for communicating the workplace etiquette that helps in shaping values and morale. Maintaining communication transparency is necessary to generate a sense of reliability among the associates (Stevens 2014).
When leaders maintain proper communication with employees, it helps to motivate them and make them comfortable enough to perform better. Efficient leaders use communication transparency to eliminate emerging conflicts within the organisational scenario. Therefore, it can be concluded that efficient leaders need to adopt a proper communication process for shaping values and group roles.

The written explanation in the first resource highlights the importance of message clarity in establishing proper communication. The introduction accurately and briefly explains the subject matter, and the discussion part begins with an explanation of the underlying concept, which is quite helpful for deriving sound knowledge of the necessity of message clarity. However, more explanation is needed of how leaders apply the method of message clarity. Apart from this limitation, the overall context is understandable, and the piece highlights the key points of the subject matter described in this report.

The second resource reflects insightful ideas about the necessity of effective body language when communicating with organisational associates. The introductory part is missing from this resource; it starts with a general description, which can be considered a limitation of the study. However, the discussion presents the uses of non-verbal communication in historic ages, and the examples are properly described. The description highlights methods of recognising body language, and the use of numeric data clarifies the concept more specifically. However, the study does not present the use of proper body language in an organisational context. Apart from this, the presentation of the sequential steps for using this type of communication is quite remarkable. The use of complex words may sometimes confuse readers.
Overall, however, it is a solid paper that provides enriched knowledge about non-verbal communication.

References

Hackman, M.Z. and Johnson, C.E., 2013. Leadership: A communication perspective. Waveland Press.

Men, L.R. and Stacks, D., 2014. The effects of authentic leadership on strategic internal communication and employee-organization relationships. Journal of Public Relations Research, 26(4), pp.301-324.

Stevens, L., 2014. Improving teamwork, staffing adequacy, and transparency to reach high reliability. Nurse Leader, 12(6), pp.53-58.

Ulmer, R.R., Sellnow, T.L. and Seeger, M.W., 2013. Effective crisis communication: Moving from crisis to opportunity. Sage Publications.