SaaS development model

Li Yu

Introduction

The real question is not whether computers can think, but whether humans can.

B.F. Skinner

The SaaS model differs from traditional software not only in how it is operated as a service, but also in its development methods and technologies.

How to develop SaaS software, and which technologies to use, are the main subjects of our research.

Key Technologies for Realizing SaaS Software

SOA technology

SOA and SaaS are often called twin sisters: two carriages of modern software services, running fast and keeping pace with each other.

Service-Oriented Architecture (SOA) was first proposed by Gartner in the late 1990s, emphasizing the importance of services. Most domestic consumers gradually came to know and understand it through the promotion of IBM, the leader in the SOA field.

As time passed, application software developers became more and more involved in SOA, and it is no exaggeration to say that SOA has become ubiquitous. As SaaS grew more popular and SOA continued to deepen, Microsoft took the lead in the industry in December 2007 by proposing the "software + service" (S+S) strategy. It aims to connect internal business integration (SOA), external business development (SaaS), a rich user experience, and other resources, organically combining "software" and "services" to maximize the value of IT and let SaaS and SOA each deliver their full benefits.

According to the definition in a Microsoft technical white paper, "software + service" is an "IT umbrella" that integrates many existing IT technologies and theories, including SaaS, SOA, and Web 2.0. With different manufacturers entering from different points, the entire IT industry is holding up the umbrella of "software + services" and heading toward the future of IT.

"The increasing complexity of the IT environment has driven continually rising demand for technology products. Technology trends for the next 10 years show that a single, standardized technology product or service will not meet the needs of social and economic development; the global technology ecosystem will develop healthily in the direction of diversity, dynamism, and service." Donald Ferguson, a Microsoft Technical Fellow and member of the Microsoft CTO Office, believes that in the field of services, users can try before buying and pay on demand; in the field of software, users have complete control: customization, one-time payment, and use for as long as they want. If users choose pure software or pure services, they in fact give up the advantages of the other. "S+S" addresses this problem well: users can choose to obtain services, continue to own software, or have both.

"SOA is also very important for software vendors that offer SaaS." The reason, said Dana Gardner, principal analyst at Interarbor Solutions, is that SOA can help them deliver application software more efficiently; moreover, they gain a price advantage over traditional packaged application software vendors.

Dr. Li Zhixiao (李志霄), Chief Technology Officer of Microsoft China, said that software and services play complementary roles in "S+S", and that 2008 would be an important year for Microsoft to step up its "S+S" strategy. According to Liu Qinzhong, director of SAP Business ByDesign, SAP would also change its approach in 2008, expanding new SaaS channels with SOA-architecture products so as to gain the dual benefits of SaaS and SOA.

Cloud computing technology

As a new way of selling application software, SaaS has begun to flourish, but as its customer base grows, basic resources such as network storage and bandwidth gradually become development bottlenecks. For many enterprises, the performance of their own computing equipment may never meet demand. A simple solution is to purchase more, and more advanced, equipment, but then equipment costs rise sharply and profits fall. Is there a more cost-effective solution? The emergence of "cloud computing" may open the door to solving this problem.

Cloud computing is an emerging way of sharing infrastructure over the Internet, usually built on large server clusters comprising computing servers, storage servers, bandwidth, and so on. It uses the transmission capacity of the high-speed Internet to move data processing from personal computers or servers to server clusters on the Internet. These clusters are managed by a large data processing center, which allocates computing resources according to customers' needs, connecting huge pools of systems to provide various IT services and achieve the effect of a supercomputer. Cloud computing centralizes all computing resources and manages them automatically by software, without human involvement. This allows companies to focus on their own business without worrying about cumbersome details, which is conducive to innovation.

Usually, SaaS providers focus on software development but have weak capabilities for managing network resources; they often spend heavily on infrastructure such as servers and bandwidth, yet the user load they can support remains limited. Cloud computing provides a simple and efficient mechanism for managing network resources: it allocates computing tasks, rebalances workloads, and dynamically assigns resources, helping SaaS vendors offer enormous capacity to large numbers of users. SaaS vendors no longer need to waste their own resources on servers, bandwidth, and other infrastructure, and can focus on specific software development and applications, achieving a win-win for end users, SaaS, and cloud computing.
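As a rough illustration of the demand-driven allocation and rebalancing described above, the following sketch assigns tasks to whichever server in a pool has the most free capacity. All names and numbers are invented; real cloud schedulers are far more sophisticated.

```python
# Minimal sketch of demand-driven resource allocation, as a data center
# might place workloads across a server pool. Illustrative only.

class ServerPool:
    def __init__(self, servers):
        # map server name -> free capacity units
        self.free = dict(servers)

    def allocate(self, task, units):
        # place the task on the server with the most free capacity
        candidates = [s for s, cap in self.free.items() if cap >= units]
        if not candidates:
            raise RuntimeError("pool exhausted; scale out")
        best = max(candidates, key=lambda s: self.free[s])
        self.free[best] -= units
        return best

    def release(self, server, units):
        self.free[server] += units

pool = ServerPool({"node-a": 8, "node-b": 4})
first = pool.allocate("report-job", 6)   # node-a has the most free capacity
second = pool.allocate("index-job", 3)   # node-a is now nearly full
```

The point is only that placement decisions follow current demand, not a fixed assignment; the SaaS vendor never sees which node runs the job.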

It can be seen that cloud computing has considerable potential in the enterprise software market, and it is also a great opportunity for SaaS suppliers. They can choose cloud computing platforms and use cloud infrastructure, leveraging its low cost at massive scale to provide more stable, fast, and secure applications and services.

To quickly grasp the concept of cloud computing, consider the cloud in a network architecture diagram as an analogy. In such diagrams, the structure of the Internet connection is hidden behind the cloud, so one can communicate with a simplified concept without understanding the connection's complexity. Cloud computing likewise hides the complexity of the computing system: developers need not understand the architecture that provides the computing resources; they simply submit their computing data to the system, and the system returns the result.

Cloud technology can be regarded as a subset of grid technology. The purpose of both is to hide the complexity of the system so that users can use it without knowing how the system works.

Ajax technology

Ajax (Asynchronous JavaScript and XML) is a set of technologies for developing web applications. It combines JavaScript, XML, DHTML, and DOM, allowing developers to build web applications that break with the practice of full page reloads. It lets browsers give users a more natural browsing experience: modifications to the client page are asynchronous and incremental, made only when an update is required. In this way, Ajax greatly improves interface responsiveness when submitting page content. In Ajax-based applications there is no need to wait for the entire page to refresh; changes are made only to the parts of the page that need updating, locally and asynchronously where possible. Users of SaaS application services thus get partial page refreshes, and browser-based B/S software feels as familiar and smooth as traditional C/S software. Through SaaS, Ajax-style applications are increasingly used in the software industry.

Web Service technology

Web Service is a component integration technology based on HTTP, with SOAP as the lightweight transmission protocol and XML as the data encapsulation standard.

Web Service is mainly an interface proposed so that information in originally isolated sites can be communicated and shared. Web Services use unified, open Internet standards, so they can be used in any environment (Windows, Linux) that supports those standards. Its design goals are simplicity and extensibility, which facilitates interoperability among a large number of heterogeneous programs and platforms, so that existing applications can be accessed by a wide range of users.

SOAP technology is the core of Web Service. It encapsulates data packets in standard XML format; the encapsulated communication information is expressed as text and follows standard encapsulation rules. This means any component model, development tool, programming language, or application system can use the technology, as long as it supports data in XML and text formats. Since essentially all component models, development tools, programming languages, application systems, and operating systems now support XML and text, SOAP can be fully supported.
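The XML encapsulation described above can be illustrated with a minimal SOAP 1.1 envelope built from Python's standard library. The method name, parameter, and method namespace below are hypothetical.

```python
# Build a minimal SOAP 1.1 envelope: a method call and its parameters
# encapsulated as namespaced XML text.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(method, params, method_ns="http://example.com/stock"):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{method_ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

msg = soap_envelope("GetPrice", {"Symbol": "MSFT"})
```

Because the result is plain XML text, any platform that can parse XML can decode the call, which is exactly the interoperability argument made above.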

In SaaS software, Web Service provides the mechanism by which components communicate. Web Service technology greatly improves system extensibility and enables seamless integration of application systems across platforms and development tools. SOAP, the core of Web Service technology, is an open standard protocol: it breaks through application barriers, can bridge enterprise firewalls and internal information systems while providing a secure, integrated application environment, and allows enterprises to encapsulate any custom information without modifying application source code, giving the system strong flexibility.

Single sign-on technology

One of the basic usability requirements of modern web applications, at least within our system, is that a user can access all the subsystems he is authorized for with a single login.

Single Sign-On (SSO) means achieving automatic access to all authorized application systems through a single login, thereby improving overall security and removing the need to remember multiple login procedures, IDs, or passwords.

In a Web Service environment, single sign-on plays a very important role. The various systems need to communicate with each other, but requiring each system to maintain every other system's access control lists is impractical. Users also expect a better experience: using the different systems involved in a business process without cumbersome repeated logins and authentications. A Web Service single sign-on environment also contains systems with their own authentication and authorization implementations, so it is necessary to map a user's credentials between systems and to ensure that once a user is deleted, that user loses access to all participating systems.

SAML is a standard for encoding authentication and authorization information in XML format. A Web Service can thus request and receive SAML assertions from a SAML-compliant authentication and authorization service, and authenticate and authorize a service requester accordingly. SAML can be used to transfer credentials between multiple systems and is therefore used in single sign-on scenarios.
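As a simplified illustration of the idea only (real SAML uses XML Signature and a much richer schema), the sketch below encodes authentication and authorization facts as XML and signs them with an HMAC, so another system holding the shared key can verify them. The key, subject, and role names are invented.

```python
# A simplified, SAML-like assertion: authentication/authorization facts
# as XML, signed so a relying system can trust them. HMAC stands in for
# XML-DSig here, for illustration only.
import hashlib
import hmac
import xml.etree.ElementTree as ET

SECRET = b"shared-idp-key"   # hypothetical key shared with the identity provider

def make_assertion(subject, roles):
    a = ET.Element("Assertion")
    ET.SubElement(a, "Subject").text = subject
    for r in roles:
        ET.SubElement(a, "Role").text = r
    xml = ET.tostring(a, encoding="unicode")
    sig = hmac.new(SECRET, xml.encode(), hashlib.sha256).hexdigest()
    return xml, sig

def verify(xml, sig):
    good = hmac.new(SECRET, xml.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig)

xml, sig = make_assertion("alice", ["crm-user", "billing-admin"])
```

A relying system accepts the roles in the assertion without re-authenticating the user, which is precisely what makes single sign-on across systems possible.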

Product Line Production in Software Factory

The economic and technical problems that hinder the transition from craft to production can be overcome by applying important new approaches that deal with complexity and change in new ways. These approaches exist today and show clear commercial potential, although most are immature. They fall into four main areas: systematic reuse, assembly development, model-driven development, and process frameworks. Let us consider them one by one.

  • Systematic reuse

One of the most important new approaches in software development is to define families of software products whose members vary but share many common characteristics. As Parnas observed, such a family provides a context in which problems common to its members can be solved collectively. By identifying and distinguishing the features that are common across products from those that vary, we can take a systematic approach to reuse. A software product family may consist of components or of entire products. For example, a family might contain different investment management applications, including the user management frameworks shared by investment management and customer relationship management applications.

Software product families are developed by system integrators (SIs), who migrate applications from one customer to another or improve existing applications to create new ones. They are also developed by independent software vendors, who build multi-market applications such as CRM or evolve multi-version applications through maintenance and improvement, and by IT organizations, which improve existing applications, develop families of related applications, or maintain multiple versions.

  • The practice of software production line

Software production lines develop families of software products, making the development of family members faster, cheaper, and less risky by identifying common features and providing for variation in specific areas. Rather than relying on ad hoc reuse, they systematically capture knowledge of how to develop family members, producing reusable assets and applying those assets during member development. Developed as a product family, requirements, architectures, frameworks, components, tests, and other assets can all be reused.
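The common-features-plus-variation idea can be sketched in a few lines. All feature and product names below are invented.

```python
# A software product family in miniature: shared core assets plus
# explicit variation points filled in per family member.
CORE = {"auth", "logging", "reporting"}      # common to every member

VARIANTS = {                                  # variation per product
    "crm":        {"contact-mgmt", "pipeline"},
    "investment": {"portfolio", "risk-analysis"},
}

def assemble(product):
    # every family member reuses the core and adds its own variations
    return sorted(CORE | VARIANTS[product])

crm_features = assemble("crm")
```

The core set is developed once and reused by every member; only the variation points are developed per product, which is where the speed and cost benefits come from.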

Of course, developing a production line has a cost; in other words, the production line embodies the classic cost-benefit trade-off. The benefit side of the equation cannot be increased by producing many copies in a market that supports only limited distribution, but it can be increased by producing many related but unique products, as described in many case studies [CN01]. Using software production lines is the first step toward software industrialization; making them cheaper to create and run is the second. Figure 3-1 depicts the main tasks performed, artifacts produced, and assets used on a production line.

Figure 3-1 Software production line

Production line developers produce assets used to develop family members, just as platform developers create device drivers and operating systems for application developers to use. An important step in developing production assets is to build one or more domain models that describe the common features the production line provides and tabulate the ways members differ. Together these models define the scope of the production line and are used to characterize expected family members. Member requirements are derived from these models, providing a way to relate changes in requirements to changes in architecture, implementation, executables, the development process, the project environment, and other parts of the software life cycle.

  • Model Driven Development

Raising the level of abstraction is an important trend. It narrows the scope of what a developer controls during implementation, and this loss of control is exchanged for a corresponding increase in power. Most commercial application developers, for example, would rather use higher-level abstractions such as C# and the .NET Framework than assembly language and system calls. Higher levels of abstraction yield many benefits, including higher productivity, fewer defects, and easier maintenance and enhancement.

Unfortunately, raising the level of abstraction and building tools for it is very expensive. If we could find a way to make it faster, cheaper, and easier, we could provide higher levels of automation for narrow problem domains. This is the goal of Model-Driven Development (MDD). MDD uses models to capture high-level information, usually expressed informally, and puts it to work automatically, by compiling models into executables or by making it easier for humans to act on. This matters because such information is currently buried in low-level artifacts such as source code files, making it difficult to track, maintain, and evolve.

Some development activities, such as building, configuring, and debugging, are already partially or fully automated using information captured from source code files and other implementation artifacts. Using information captured in models, MDD can automate more activities at greater scale, such as model debugging and automatic configuration tools. Here are some examples:

  • Routine tasks, such as producing one artifact from another, can often be fully automated. For example, test harnesses can often be generated automatically from user interface mockups, exercising page transitions to simulate user activity.
  • Other tasks, such as resolving differences between artifacts, can be partially automated. For example, mismatches between table columns and form fields can be flagged as problems for the user and then corrected automatically at the user's discretion.
  • Adapters, such as Web service wrappers, can be generated automatically from models of the differences between implementation technologies, bridging them. Models can also be used to configure representations, protocols, and other adaptive integration mechanisms.
  • Models can be used to define configurations of artifacts that are composed together, automating the configuration process. A model of the configuration environment can be used to constrain designs so that they can be deployed correctly.
  • Models can describe the configuration of deployed components, capturing information about operational characteristics such as load balancing, failure recovery, and resource allocation policies, and automating management activities such as data collection and reporting.
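The first of these examples, deriving a test harness from a page-transition model, can be sketched as follows. Page and action names are invented; the point is that the model is data and the harness is derived from it mechanically.

```python
# Generate a simple test harness from a page-transition model: the model
# describes which actions each page allows and where they lead, and the
# harness walks it to simulate user activity.
MODEL = {
    "login":     {"submit": "dashboard"},
    "dashboard": {"logout": "login", "open-report": "report"},
    "report":    {"back": "dashboard"},
}

def simulate(start, actions):
    """Walk the model, failing if an action is not allowed on a page."""
    page = start
    for act in actions:
        transitions = MODEL[page]
        assert act in transitions, f"{act!r} not allowed on {page!r}"
        page = transitions[act]
    return page

end = simulate("login", ["submit", "open-report", "back", "logout"])
```

Changing the model regenerates the harness's behavior with no hand-written test code, which is the automation payoff MDD promises.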
  • Domain-specific languages

For MDD, we are no longer interested in dead-end languages like the 4GLs, nor in one high-level language to implement every aspect of development; the weaknesses of those strategies are well documented. Nor are we interested in models used only for presentations and documentation notes. Unfortunately, models are often used to communicate with humans rather than computers, creating the impression that models are not first-class development artifacts like source code. We are interested in working with models using tools, in the same way we work with source code. For this, models that merely document designs will not do: models must be precise and unambiguous. At the same time, to raise the level of abstraction, modeling languages must focus on narrow domains rather than serve as general-purpose programming languages. Such a language has the following requirements:

  • The goals of the language design must be clearly stated, so that reviewers familiar with the domain can evaluate the language and decide whether it achieves them.
  • The language must let people working in the domain capture business concepts. A language for developing and assembling Web services must include concepts such as Web services, Web methods, protocols, and protocol-based connections. Likewise, a language for visualizing and editing C# source code must contain C# concepts such as classes, members, fields, methods, properties, events, and delegates.
  • The language must present its concepts under names familiar to its users. For example, a C# developer finds a model of a class with fields and methods more natural than a model of a class with attributes and operations.
  • The language's notation, whether graphical or textual, must map easily onto the problems being solved. The things its users do daily must be easy to express with its concepts. For example, manipulating inheritance must be easy in a language for visualizing and editing C# source code.
  • The language must have a well-defined set of rules, called a grammar, governing how expressions are formed from its concepts. This makes it possible for tools to check whether expressions are well-formed, while helping users write them.
  • The semantics of each expression must be well defined, so that users can create models others understand, tools can generate valid implementations from models, and metadata captured from models does what users expect when used to perform tasks such as configuring a server.

A language that meets these criteria is called a domain-specific language (DSL); it models the concepts of a specific domain. DSLs are stricter than general-purpose modeling languages. Like a programming language, a DSL has textual or graphical notation. SQL and HTML are two examples of DSLs, for defining relational data and web pages respectively.
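A toy example may make this concrete. The few lines of text below are a DSL whose syntax is invented for illustration: its concepts (service, connect) come from the domain, its grammar is checkable by a tool, and its semantics (a connection may only join declared services) can be enforced mechanically.

```python
# A toy DSL for wiring web services, parsed and checked by a small tool.
import re

PROGRAM = """
service Orders
service Billing
connect Orders -> Billing
"""

def parse(src):
    services, connections = set(), []
    for line in src.strip().splitlines():
        if m := re.fullmatch(r"service (\w+)", line.strip()):
            services.add(m.group(1))
        elif m := re.fullmatch(r"connect (\w+) -> (\w+)", line.strip()):
            connections.append((m.group(1), m.group(2)))
        else:
            raise SyntaxError(line)          # grammar violation
    for a, b in connections:                 # semantic check
        if a not in services or b not in services:
            raise NameError(f"unknown service in {a} -> {b}")
    return services, connections

services, connections = parse(PROGRAM)
```

Because expressions are precise and unambiguous, a tool can validate them and generate an implementation from them, which is exactly what distinguishes a DSL from documentation-only models.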

Figure 3-2 shows two example DSL diagrams, taken as screenshots from Microsoft Visual Studio 2005 Team System. The DSL on the left describes components, such as Web services, and is used to automate component development and configuration. The DSL on the right describes the logical server types in a data center and is used to design and deploy data center configurations. Web services are deployed by dragging service components onto logical servers; mismatches between resource requirements and what the logical servers provide are flagged as validation errors on the diagram.

Figure 3-2 Domain-specific languages

  • Incremental code generation

The key to efficient code generation is narrowing the conceptual gap between the model and the generated code. This lets tools exploit platform features and produce focused, efficient, platform-specific implementations. One way to generate more code is to bring the model closer to the platform, as shown in Figure 3-3. For example, a modeling language defined in terms of a programming language's type system can model implementations more faithfully than one defined in terms of an abstract type system. The model then becomes a view of the code, in which the developer manipulates program structure graphically, such as class and method definitions. Such a tool reveals relationships and dependencies that are hard to see in code, and saves time and effort by generating the code for program structure. It can support styles such as relational, collection-based programming, or provide advanced features such as pattern construction, application, and evaluation.
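A minimal sketch of generation from a model close to the platform (the model and its contents are invented): the model is expressed in terms of the language's own constructs, and the generator emits ordinary source code that can be compiled and used directly.

```python
# Generate program structure from a model: the model names a class and
# its fields, and the generator emits real, executable source code.
MODEL = {"name": "Customer", "fields": ["id", "email"]}

def generate(model):
    lines = [f"class {model['name']}:"]
    args = ", ".join(model["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for f in model["fields"]:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines)

source = generate(MODEL)
namespace = {}
exec(source, namespace)            # the generated code is ordinary code
Customer = namespace["Customer"]
c = Customer(1, "a@example.com")
```

Because the model's concepts map one-to-one onto the platform's (classes, fields), generation is simple and complete; the trade-off, as the text notes, is that the model is no more abstract than the code it produces.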

Figure 3-3 SaaS operator relationship group

Of course, limiting abstractions to what the platform offers diminishes the role of modeling, making it little more than another programming surface. So how do we work at a higher level of abstraction? We use more abstract models and close the gap by bringing the platform closer to the model, with frameworks or with transformation, as shown in Figure 3-4. Let us look at each in turn.

Figure 3-4 Programming language modeling

Use high-level abstractions

  • We can use frameworks to implement higher-level abstractions, and use models to generate small pieces of code at the framework's extension points. In effect, models help users complete framework extensions by visualizing framework concepts in an intuitive way. Building graphical applications directly on Microsoft's operating systems, for example, was difficult at first; Microsoft Visual Basic later made graphics easier through its form and control concepts.
  • Instead of frameworks, we can define progressively lower-level DSLs and transform between them. To span a wider gap, we can chain two or more DSLs. Models described in the highest-level DSL can be transformed into executable software through progressive refinement, as shown in Figure 3-4. This is how compilers work: high-level languages like C# and Java are translated into intermediate code such as bytecode or IL, which is JIT-compiled into the target platform's binary format.
  • Composition mechanisms

Of course, handwritten code must usually be combined with framework code to produce a complete executable program. Several different mechanisms can be used to do this; the important difference between them is when the binding takes place.

Figure 3-5 Composition of Design Time


  • Design-time binding combines handwritten code and framework code in the same artifacts before compilation, as shown in Figure 3-5. This includes constraining the editing experience (e.g., editors with read-only regions) to prevent users from modifying framework code; in other tools, users add handwritten code in special windows.
  • Runtime binding merges handwritten code and framework code through calls and callbacks. Delegation-based runtime binding mechanisms are described by design patterns, such as the following from Gamma et al.: events (Observer), adapters (Adapter), policy objects (Strategy), factories (Abstract Factory), orchestration (Mediator), wrappers (Decorator), proxies (Proxy), commands (Command), and filters (Chain of Responsibility) [GHJV95]. Two advantages of runtime binding are that interfaces let handwritten code be combined with framework code while allowing dynamic configuration through object substitution, and that delegating to separate classes protects handwritten code from regeneration. A minor disadvantage is the cost of runtime method calls. Several runtime binding mechanisms are very popular in component programming models, as shown in Figure 3-6; all have been very successful in large-scale commercial products.
  • Handwritten subclass. The user provides handwritten code in a subclass of a framework class; abstract methods in the framework code define explicit override points. For example, the user subclasses a framework entity, and the framework calls the handwritten code through the Template Method pattern.
  • Framework subclass. The user provides handwritten code in a parent class of the framework code; the framework code overrides abstract methods declared in the handwritten code. For example, a framework entity subclass calls up into the handwritten parent class.
  • Handwritten delegate class. The user provides handwritten code in a delegate class. For example, a framework entity calls the handwritten entity at specified points, such as before or after setting a property value. This is effectively the Proxy pattern.
  • Framework delegate class. Handwritten code calls into a framework delegate class to obtain framework services. For example, a handwritten entity calls a framework entity to set or get property values.

Figure 3-6 Runtime composition

  • Compile-time binding merges handwritten code and framework code during compilation, as shown in Figure 3-7. Partial specifications merged at compile time are a good way to do this; the Visual Basic and C# languages in Visual Studio 2005 support compile-time merging through partial classes.

Figure 3-7 SaaS operator relationship group
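Two of the binding mechanisms listed above, the handwritten subclass (Template Method) and the framework-calls-delegate case, can be sketched as follows. All class names are invented.

```python
# Two runtime composition mechanisms in miniature.

class FrameworkEntity:                       # framework code
    def save(self):
        # Template Method: the framework defines the sequence and calls
        # the handwritten override point.
        log = ["validate:" + self.validate()]
        log.append("persist")
        return log

class Customer(FrameworkEntity):             # handwritten subclass
    def validate(self):
        return "ok"

class AuditDelegate:                         # handwritten delegate class
    def after_set(self, name, value):
        return f"audit {name}={value}"

class FrameworkField:                        # framework code
    def __init__(self, delegate):
        self.delegate = delegate             # bound at runtime
    def set(self, name, value):
        # call-out point: framework invokes the handwritten delegate
        return self.delegate.after_set(name, value)
```

Note the advantage claimed in the text: the handwritten code lives in its own classes, so regenerating the framework code leaves it untouched.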

  • Assembly development

Important innovations in assembly development are platform-independent protocols, self-description, variable encapsulation, assembly by orchestration, and architecture-driven development.

  • Platform independent protocol

Web services technology succeeded where earlier component assembly technologies, which tried to separate component specification and assembly from implementation technology, failed. Since XML is a technology for managing information rather than for building components, Web services use encapsulation to map Web method calls onto native method calls in the underlying component implementation technology. CORBA attempted a similar strategy, but its complexity demanded significant investment from platform vendors, which limited its adoption. Simple XML-based protocols significantly reduce implementation difficulty, ensuring universality. By encoding remote method invocation requests as XML, they avoid the interoperability problems caused by platform-specific invocation encodings and parameter marshaling. They also achieved cross-platform interoperability from the start by gaining broad acceptance as industry standards.

  • Self-description

Self-description reduces architectural mismatch by improving component packaging to make assumptions, dependencies, behavior, resource consumption, performance, and certifications explicit. It provides metadata that can be used to automate component discovery, selection, licensing, acquisition, installation, adaptation, assembly, testing, configuration, deployment, monitoring, and management.

The most important forms of self-description describe a component's assumptions, dependencies, and behavior, so that developers can reason about interactions between components and tools can validate assemblies. The most widely used specifications in object orientation are class and interface declarations. They define the behavior a class provides, but capture assumptions and dependencies only by naming other classes and interfaces in method signatures. A contract is a richer specification: it governs the interaction between components regardless of which one initiates a call, describing the sequence of interactions and the responses to protocol violations and other unpredictable conditions.

Of course, contracts are useless unless they are enforced. There are two ways to enforce a contract:

  • Assemble only components whose contracts do not mismatch.
  • Use the information the contracts provide to build adapters that let the components interact directly, or to coordinate the interaction between them.
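One way to enforce a contract is to check, at each call, the interaction sequence the contract describes before delegating to the component. The sketch below does this for an invented open/read/close protocol; the wrapped component and its methods are illustrative.

```python
# Enforcing a contract on component interaction: the contract names the
# legal call sequence, and a checking wrapper rejects protocol violations.
CONTRACT = {
    "start": {"open"},
    "open":  {"read", "close"},
    "read":  {"read", "close"},
    "close": set(),
}

class ContractChecked:
    def __init__(self, component):
        self.component, self.state = component, "start"

    def call(self, method, *args):
        if method not in CONTRACT[self.state]:
            raise RuntimeError(f"{method!r} illegal after {self.state!r}")
        self.state = method
        return getattr(self.component, method)(*args)

class File:                      # the wrapped component
    def open(self):  return "opened"
    def read(self):  return "data"
    def close(self): return "closed"

f = ContractChecked(File())
```

The sequence information here goes beyond a class declaration: a method signature alone could never express that `read` is legal only between `open` and `close`.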

Garlan recommends standard adaptation techniques, recipes, and tools that provide encapsulation and data conversion [Gar96]. One of the most promising adaptation strategies is to publish partial components that can be completed during assembly by adding encapsulation aspects that supply the code the assembly requires. This strategy, called variable encapsulation, is described below.

Another important aspect of self-description is certification. If a component can certify that it has only the specified dependencies, consumes only the specified resources, has specific functional characteristics under given conditions, or has certain known weaknesses, then the functional and operational characteristics of software assembled from such components can be predicted. This has been studied at Carnegie Mellon University's Software Engineering Institute under the name Predictable Assembly from Certifiable Components (PACC).

  • Variable encapsulation

We have seen that static encapsulation limits reusability: a component is fitted to one particular assembly by statically binding non-functional, context-dependent aspects to its functional core. Variable encapsulation reduces architecture mismatches by publishing partially encapsulated components that can be adapted to new contexts, using their functional aspects to select and bind appropriate non-functional aspects, as shown in Figure 3-8. The form a component takes in a particular assembly can then be determined by the context in which it is placed. Making component boundaries more elastic in this way improves flexibility and reduces architecture mismatches. With non-functional assumptions removed, the functional parts exposed at component boundaries can be reworked; effective adaptations can be identified in advance and, in some cases, even automated by tools.

Figure 3-8 Variable Encapsulation

Variable encapsulation is a variation of Aspect-Oriented Programming (AOP), a method in which the different aspects of a system are developed separately and then combined [KLM97]. Variable encapsulation differs from common AOP practice in three ways.

  • Variable encapsulation binds aspects at encapsulation boundaries, whereas AOP, as commonly practiced, weaves aspects into unencapsulated lines of code. On the unencapsulated side, the same problems arise as when assembling poorly packaged components: architecture mismatch and unpredictability. Indeed, aspect weaving is more prone to these problems than component assembly, since components at least describe their behavior and carry some packaging that prevents undeclared dependencies. AOP’s lack of packaging makes it difficult for developers to reason about aspect compatibility and about the functional or operational characteristics of the resulting code, and makes it almost impossible for tools to check aspect code.
  • AOP binds aspects during component development; variable encapsulation binds them later, for example during component assembly or configuration. This matters because the contexts a component may be placed into are not known until after the component is published. In fact, to support assembly-based development, as described earlier, third parties must be able to predictably assemble and deploy independently developed components. This requires a formal way to separate the functional aspect, the encapsulation aspect, the specification aspect, and the packaging aspect. Variable encapsulation can also be progressive, taking place in stages: some aspects can be bound during development, some during assembly, and some at run time.
  • Variable encapsulation is architecture-driven, whereas AOP is not. The aspects separated from the functional core must be explicitly defined through interfaces, abstract classes, WSDL files, or other forms of contract.
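The late, interface-driven binding of a non-functional aspect described in the differences above can be sketched with a JDK dynamic proxy: the tracing aspect is bound at assembly time, through the component’s declared interface, rather than woven into its source. The service and aspect names below are hypothetical.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Functional core, published without any logging/tracing aspect.
interface OrderService {
    String place(String item);
}

class PlainOrderService implements OrderService {
    public String place(String item) { return "placed:" + item; }
}

// At assembly time, a non-functional aspect (call tracing) is bound around the
// component through its explicitly defined interface, not by modifying its code.
class Aspects {
    static OrderService withTracing(OrderService target, StringBuilder log) {
        InvocationHandler h = (proxy, method, args) -> {
            log.append("enter:").append(method.getName()).append(';');
            Object result = method.invoke(target, args);
            log.append("exit:").append(method.getName()).append(';');
            return result;
        };
        return (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[] { OrderService.class }, h);
    }
}

class VariableEncapsulationDemo {
    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        OrderService s = Aspects.withTracing(new PlainOrderService(), log);
        System.out.println(s.place("book")); // placed:book
        System.out.println(log);             // enter:place;exit:place;
    }
}
```

Because the aspect attaches to the interface, it could equally be bound during configuration or at run time, which is the progressive binding the text describes.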
  • Process management assembly

If sufficient contract mechanisms exist, services can have the order of the messages exchanged between them managed by a process management engine, such as Microsoft BizTalk Server, as shown in Figure 3-9. Process-managed assembly makes assembly development easier because there are far fewer dependencies between services than between binary components. Unlike classes, services need not reside in the same implementation. Unlike components, which require platform-specific protocols, services can be assembled across platform boundaries. Two services can interact if the contracts between them are compatible. They can be developed and deployed separately, then assembled through process management. They can even reside in different administrative and organizational domains if appropriate interception and relay services are available. In other words, process-managed assembly eliminates design-time, compile-time, and deployment-time dependencies between components.

Figure 3-9 Process management assembly

Process-managed assembly is essentially mediation, as described by the Mediator pattern of Gamma et al.: a mediator manages the flow of interactions between components. A mediator has powerful capabilities. One is to filter or translate messages as components interact. Another is to control the interaction, maintaining state across multiple calls if necessary. This allows the mediator to reason about interactions and, if necessary, change them through conditional logic. A mediator can also perform useful auxiliary functions such as logging, enforcing security policies, and bridging between different technologies or different versions of the same technology. A mediator can also be a functional part of an assembly, enforcing business rules or performing a business function, such as completing a business transaction.
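A minimal Mediator sketch in Java shows these capabilities: the mediator owns the interaction flow, applies conditional logic, keeps a log, and the two services never call each other directly. The service roles and message formats are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// A uniform contract for the mediated services.
interface Service {
    String handle(String message);
}

// The mediator controls the flow between a validator and a biller,
// logging and filtering as it goes.
class Mediator {
    private final Service validator;
    private final Service biller;
    final List<String> log = new ArrayList<>();

    Mediator(Service validator, Service biller) {
        this.validator = validator;
        this.biller = biller;
    }

    String process(String order) {
        log.add("received:" + order);
        String verdict = validator.handle(order);
        if (!"ok".equals(verdict)) {          // conditional logic inside the mediator
            log.add("rejected:" + order);
            return "rejected";
        }
        String invoice = biller.handle(order); // only reached for valid orders
        log.add("billed:" + order);
        return invoice;
    }
}

class MediatorDemo {
    public static void main(String[] args) {
        Service validator = m -> m.isEmpty() ? "empty" : "ok";
        Service biller = m -> "invoice-for-" + m;
        Mediator mediator = new Mediator(validator, biller);
        System.out.println(mediator.process("book")); // invoice-for-book
    }
}
```

A process engine such as BizTalk plays the same role at enterprise scale, with the flow defined declaratively rather than in code.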

  • Architecture Driven Development

It is better to prevent mismatched components from being assembled than to detect invalid assemblies after the fact, and better still to improve the supply of well-matched components. That is the goal of architecture. According to Shaw and Garlan, a software architecture describes a system’s components, their interactions, and the acceptable patterns of composition; a well-designed architecture reduces the risk of architecture mismatch by constraining design decisions.

Of course, developing software architectures is challenging; it takes many architects years of practice to become proficient even within a limited architectural style or application domain. Assembly development cannot reach industrial scale without significant advances in architectural practice and greater ability to trust and reuse software architectures.

These are the goals of Architecture-Driven Software Development (ADD), including:

  • A standard for describing, interpreting, and using architectures.
  • A method for predicting the utility of design decisions.
  • Patterns or architectural styles, used to organize design expertise and help designers develop well-partitioned component representations.

An architectural style is a coarse-grained pattern that provides an abstract framework for a family of systems. It defines a set of rules that specify the kinds of components that can be used to assemble a system, the kinds of relationships that may be used in the assembly, the constraints on how they are assembled, and the assumptions under which the assembly is valid. For example, a Web service architectural style can specify that components expose ports defined by Web service descriptions, that connections are established between ports, that two ports can be connected only if they are compatible, and that communication uses SOAP over HTTP. Other architectural styles include the data-flow, layered, and MVC styles. An architectural style promotes partitioning and design reuse by providing solutions to frequently recurring problems, and also promotes the following:

  • Reuse, by identifying common architectural elements shared by systems based on the style.
  • Clarity of expression, by defining a standard vocabulary.
  • Interoperability, by defining standard communication mechanisms.
  • Visualization, by defining standard notations.
  • Tool development, by defining enforceable constraints.
  • Analysis, by identifying the salient features of systems based on the style.
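A pipe-and-filter flavor of the data-flow style mentioned above can be sketched in Java, where “port compatibility” is simply the type compatibility of each filter’s input and output: the compiler refuses assemblies whose adjacent ports do not match. The filters below are invented for illustration.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Pipe-and-filter style: each filter has one typed input port and one typed
// output port. Two filters may be connected only if the first's output type
// matches the second's input type -- enforced here by the generics.
class Pipeline {
    static <A, B, C> Function<A, C> connect(Function<A, B> f, Function<B, C> g) {
        return f.andThen(g);
    }
}

class StyleDemo {
    public static void main(String[] args) {
        // Filter 1: split text into words. Filter 2: count them.
        Function<String, List<String>> split = s -> Arrays.asList(s.split(" "));
        Function<List<String>, Integer> count = List::size;

        // Legal assembly: split's output port (List<String>) matches count's input port.
        Function<String, Integer> wordCount = Pipeline.connect(split, count);
        System.out.println(wordCount.apply("service oriented architecture")); // 3

        // Pipeline.connect(count, split) would not compile: the ports mismatch.
    }
}
```

This is the “prevention” half of the argument: an invalid assembly is rejected before it is ever built.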

An architecture description is a document that defines a software architecture. IEEE Standard 1471, the recommended practice for architectural description of software-intensive systems, provides guidelines for such descriptions [IEEE1471]. According to these guidelines, a system has one or more stakeholders, each with particular concerns and interests in certain aspects of the system. To be useful, an architectural description must take a form and structure that its stakeholders can understand. An ADS is a template used to describe the architecture of a family of systems. A viewpoint defines the kind of view that can describe one aspect of a software product; it also provides a pattern for producing such descriptions, defining their scope, target, and audience, and the conventions, languages, and methods used to develop them.

The principal elements used to specify a viewpoint include:

  • An identifier and other introductory information (e.g., author, date, references).
  • The stakeholder concerns addressed by the viewpoint.
  • The conventions, languages, and methods used to produce views based on the viewpoint.
  • Consistency and completeness checks for the resulting views.

A view describes a software product from a given viewpoint. A view is semantically closed, meaning that it describes the whole software product from that viewpoint. A view contains one or more artifacts, each developed according to the requirements of the viewpoint. A view is an instance of its viewpoint and must conform to it to be well formed. A view that follows a Web page design viewpoint, for example, should describe the Web page layout of a particular software product, and should do so using the notation the viewpoint defines. The principal elements used to specify a view include:

  • An identifier and other introductory information (e.g., author, date, references).
  • The identifier of the viewpoint the view follows.
  • A description of the software product constructed using the conventions, languages, and methods the viewpoint defines.

To understand the difference between a view and its viewpoint, consider a logical database design for a business application. The logical database design is a view of the application or, more precisely, of its constituent components. The aspect of the application it covers and the language used to describe it are specified by a logical database design viewpoint. Many different business applications can be described using the same viewpoint, producing different views, each describing the logical database of some business application. These views describe the same aspect, in the same language, but with different content, since each describes a different application. An assembly view can be decomposed into views of individual components drawn from the same viewpoint.

According to IEEE 1471, an architectural description must identify the viewpoints used and the rationale for using them. An ADS for a specific purpose can therefore be defined by enumerating the set of viewpoints it uses. For example, an ADS for a consumer-to-business Web application might require a viewpoint for the layout of the Web pages and a viewpoint for the layout of the business data. Every view in an architecture description must follow a viewpoint defined by the ADS.

  • Process framework

The key to process maturity is maintaining flexibility as complexity increases with project size, geographic distribution, or duration. Experience tells us that a little structure increases flexibility by reducing the amount of work required. This principle can be applied across a family of software products by using a process framework, which manages complexity without reducing flexibility.

One difficulty with formal processes is that they are too abstract. The guidance they provide is obvious to experienced developers, yet not specific enough for beginners. To add value in use, a process must be bound to the details of the current project; but every project is unique in many ways, and no single process can satisfy all projects. We know how to solve such problems: we can customize and tailor a formal process for a particular product family. Without professional support, however, such tailoring rarely succeeds in the market. Some vendors customize a process for a particular user, often adding useful elements from other processes such as XP. Others, especially system integrators and ISVs, tailor a process to suit a particular product or consulting practice. Either way, the key to using any process efficiently is to specialize it for a given project so that it contains only immediately applicable guidance. The changes produced by this customization are substantial, and the result often bears little resemblance to the original process.

A highly specialized process includes detailed project information such as tool configurations, network share paths, developer working instructions, API documentation, the names of key contacts for processes like configuration management and bug tracking, check-in policies, programming style, peer review practices, and other details about the project and the project team. As with other forms of systematic reuse, this customization pays off only if it can be used more than once; reusing a highly specialized process asset increases flexibility by eliminating work, just as other reused assets do. As Jacobson has said, the fastest way to build something is to reuse something that already exists, especially reusable assets that can be customized and extended. Many things can be reused systematically, and the development process is one of them.

A process framework is decomposed into micro-processes, each attached to an ADS viewpoint. Each micro-process describes what is needed to produce a view: it can enumerate the key decision points, identify the transitions at each decision point, describe the required and optional activities, and describe the resources each activity requires and the artifacts it produces. Each artifact has preconditions that must hold before it is processed, postconditions that must hold afterward, and invariants that must hold for the artifact to remain stable. For example, a loop has a condition established before it starts and a condition that holds when it exits, and we require all code to build and test correctly. We call this structure a process framework because it defines the space of processes that may be composed, depending on the needs and environment of a given project, rather than prescribing one process for all projects. Once a process framework is defined, micro-processes can be combined into whatever workflow a project requires: top-down, bottom-up, inside-out, test-then-code or code-then-test, or any combination of these.
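A micro-process with explicit pre- and postconditions on its artifacts might be modeled as follows. This is a toy sketch: the “build” activity and the string artifacts are stand-ins for real process steps and work products.

```java
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// A micro-process: an activity guarded by a precondition on its input artifact
// and a postcondition on the artifact it produces.
class MicroProcess {
    private final Predicate<String> pre;
    private final UnaryOperator<String> activity;
    private final Predicate<String> post;

    MicroProcess(Predicate<String> pre, UnaryOperator<String> activity,
                 Predicate<String> post) {
        this.pre = pre;
        this.activity = activity;
        this.post = post;
    }

    String run(String artifact) {
        if (!pre.test(artifact))
            throw new IllegalStateException("precondition failed");
        String result = activity.apply(artifact);
        if (!post.test(result))
            throw new IllegalStateException("postcondition failed");
        return result;
    }
}

class ProcessDemo {
    public static void main(String[] args) {
        // The code artifact must exist before the build step runs,
        // and the result must be built and tested afterward.
        MicroProcess build = new MicroProcess(
                a -> a.contains("code"),
                a -> a + "+built+tested",
                a -> a.endsWith("tested"));
        System.out.println(build.run("code")); // code+built+tested
    }
}
```

Because each step carries its own conditions, steps can be recombined into top-down, bottom-up, or mixed workflows and still fail fast when a condition is violated.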

These workflows can be driven by resources, starting work as resources become available and optimized using techniques such as PERT and CPM. Many kinds of resources can drive planning, including requirements and source code, developers and program managers, and configuration management or defect tracking systems, much as opening a port on a server or allocating memory drives a device. This is called constraint-based planning. Constraint-based planning uses a small number of architectural requirements, balancing the need for flexibility: it provides guidance by placing constraints on development artifacts rather than prescribing a process. Flexibility is obtained by dynamically generating workflows under these constraints, adapting to a large number of environmental variables, while capturing lessons learned and reducing the cost and time of rediscovering knowledge.

A process framework need not be heavyweight or lightweight by design; it can contain more or less detail as required. This provides a way to scale the process to its environment. For example, a small, agile team can use a small framework that provides only a few key practices, such as XP, while a large organization can add many details of the build, inspection, and test processes or component sharing rules.

System Architecture Design

The system architecture determines the stability, robustness, scalability, compatibility, and availability of a system; it is the soul of the system and the heart of the architect’s concern. A good architecture is the beginning of a successful system; without one, even the best code and design will not help.

Introduction to the main development frameworks of .net

  • Castle

Castle is an open source project for the .NET platform. From the ActiveRecord ORM data access framework to the IoC container, to the MonoRail MVC framework for the Web layer and AOP support, it covers most of what is needed to quickly build enterprise applications. Its key technologies include ActiveRecord, Facilities, MonoRail, and so on.

Advantages: it embodies the ideas of ORM, IoC, ActiveRecord, and the MVC framework.

Disadvantage: the division between the framework’s levels is not very clear.

  • PetShop

PetShop is used by Microsoft to demonstrate the capabilities of .NET enterprise system development. PetShop 4.0 was released by Microsoft for SQL Server 2005 and Visual Studio 2005 and uses several new technologies: cached data synchronized with database updates, new Web controls, Master Pages, asynchronous communication, and message queues. These are very useful techniques. The Abstract Factory pattern is used widely in PetShop. Thanks to Master Pages, Membership, and Profile, the amount of code in the presentation layer is reduced by 25% and in the data layer by 36%.

Figure 3-10 Architecture of PetShop4.0

In the data access layer (DAL) of PetShop 4.0, a DAL interface abstracts the data access logic, and a DAL factory serves as the factory module for data access objects. The DAL interface has concrete implementations: a SQL Server DAL supporting MS SQL Server and an Oracle DAL supporting Oracle. The Model module contains the data entity objects. The data access layer thus fully adopts the idea of “programming to an interface”: the abstracted IDAL module removes the dependency on a specific database, which makes the whole data access layer amenable to database migration. The DALFactory module manages the creation of DAL objects for easy access by the business logic layer. Both the SQLServerDAL and OracleDAL modules implement the IDAL interface; the logic they contain consists of the Select, Insert, Update, and Delete operations on the database. Because the database types differ, the operations on the database differ, and the code differs accordingly.
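PetShop itself is written in C#, but the interface-plus-factory structure it applies can be sketched in Java. The class and provider names below are hypothetical; PetShop reads the implementation’s class name from configuration and instantiates it via reflection, which the lookup table here stands in for.

```java
import java.util.HashMap;
import java.util.Map;

// Interface-oriented data access: the business layer depends only on this
// interface, never on a database-specific class.
interface ProductDal {
    String select(int id);
}

// Database-specific implementations (both hypothetical).
class SqlServerProductDal implements ProductDal {
    public String select(int id) { return "sqlserver:product-" + id; }
}

class OracleProductDal implements ProductDal {
    public String select(int id) { return "oracle:product-" + id; }
}

// The factory chooses the implementation; swapping databases means changing
// one registry entry (in PetShop, one configuration value), not business code.
class DalFactory {
    private static final Map<String, ProductDal> REGISTRY = new HashMap<>();
    static {
        REGISTRY.put("SQLServer", new SqlServerProductDal());
        REGISTRY.put("Oracle", new OracleProductDal());
    }
    static ProductDal create(String provider) { return REGISTRY.get(provider); }
}

class DalDemo {
    public static void main(String[] args) {
        ProductDal dal = DalFactory.create("SQLServer");
        System.out.println(dal.select(1)); // sqlserver:product-1
    }
}
```

The business logic layer only ever sees `ProductDal`, which is exactly the weak upward dependency the text describes.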

In addition, the abstracted IDAL module not only removes downward dependencies but also leaves only a weak dependency from the business logic layer above it.

Advantages: it embodies the factory pattern and IoC ideas, and demonstrates .NET enterprise-level development.

Disadvantage: no ORM idea.

  • NHibernate

Hibernate is the most widely used open source object-relational mapping framework. It provides a very lightweight object encapsulation of Java’s JDBC (similar to ADO.NET), allowing programmers to manipulate the database freely using object-oriented thinking, and it has become quite popular in Java development circles. NHibernate, like NUnit and NAnt, is the .NET implementation of Hibernate. It chiefly embodies the idea of ORM, solves the persistence-layer problem in layered development, and is very important in N-tier development.

Advantages: embodies ORM and the persistence layer.

Disadvantages: The configuration is complex, and it relies too much on XML files.

Summary of techniques used:

OR mapping, layered architecture, Castle ActiveRecord, Atlas, reflection, design patterns (Singleton, Simple Factory, Strategy), XML, IoC, and frameworks.

Introduction to the current main development frameworks of J2EE

  • Struts framework

The Struts framework is an open source product for developing Web applications based on the Model-View-Controller (MVC) design paradigm. It uses and extends the Java Servlet API and was originally created by Craig McClanahan; in May 2000 it was donated to the Apache Foundation. Struts provides a powerful custom tag library, tiling, form validation, and I18N (internationalization). In addition, Struts supports many presentation-layer technologies, including JSP, XML/XSLT, JavaServer Faces (JSF), and Velocity; it also supports model-layer technologies including JavaBeans and EJB.

The following is the core flow of Struts:

JSP (TagLib) ——> ActionForm ——> Action ——> Event ——> EJBAction ——> EJB ——> DAO ——> Database

JSP (TagLib) (forward) <—— Action <—— EventResponse <——

Advantages: Based on MVC pattern, well structured, based on JSP .

Disadvantages: scalability is limited, it is not well suited to large projects with complex logic, and the framework’s layering is not very clear.

  • Spring Framework

The Spring Framework is a layered Java/J2EE application framework based on the code published in Expert One-on-One J2EE Design and Development. It provides a simple development approach that automates away a large number of property files and helper classes in a project.

Spring is an open source framework created by Rod Johnson and described in his book Expert One-on-One J2EE Design and Development. It was created to address the complexity of enterprise application development. Spring makes it possible to use basic JavaBeans to do things that previously only EJBs could do. However, Spring’s uses are not limited to server-side development; any Java application can benefit from Spring’s simplicity, testability, and loose coupling.

The main features included in the Spring Framework are :

1. Powerful JavaBeans-based configuration management, applying the Inversion-of-Control (IoC) principle.
2. A core bean factory usable in any environment, from applets to J2EE containers.
3. A generic abstraction layer for database transaction management, allowing pluggable transaction managers and making it easy to demarcate transactions without dealing with low-level details.
4. A meaningful JDBC abstraction layer with better exception handling.
5. Integration with Hibernate: DAO implementation support and transaction strategies.
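The IoC principle behind feature 1 can be sketched with a toy bean factory. This is an illustration of the principle only, not Spring’s actual API: components declare their dependencies through constructors, and the factory wires them instead of the components creating collaborators themselves.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A dependency the service needs (names hypothetical).
interface GreetingDao {
    String load();
}

class FixedGreetingDao implements GreetingDao {
    public String load() { return "hello"; }
}

// The component never constructs its collaborator -- it is injected.
class GreetingService {
    private final GreetingDao dao;
    GreetingService(GreetingDao dao) { this.dao = dao; }
    String greet(String name) { return dao.load() + ", " + name; }
}

// A toy bean factory: control of object creation is inverted into the container.
class BeanFactory {
    private final Map<String, Supplier<Object>> recipes = new HashMap<>();
    void register(String name, Supplier<Object> recipe) { recipes.put(name, recipe); }
    Object getBean(String name) { return recipes.get(name).get(); }
}

class IocDemo {
    public static void main(String[] args) {
        BeanFactory factory = new BeanFactory();
        factory.register("dao", FixedGreetingDao::new);
        factory.register("service",
                () -> new GreetingService((GreetingDao) factory.getBean("dao")));
        GreetingService s = (GreetingService) factory.getBean("service");
        System.out.println(s.greet("Spring")); // hello, Spring
    }
}
```

Spring does the same wiring from XML or annotation configuration, which is what makes plain JavaBeans sufficient where EJBs were once required.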

Advantages: embodies the ideas of J2EE, containers, lightweight frameworks, inversion of control, and aspect orientation.

Disadvantages: The structure is complex and difficult to understand.

  • Hibernate framework

Hibernate is an open source object-relational mapping (ORM) framework that provides a very lightweight object encapsulation of JDBC. It offers an easy-to-use framework for mapping an object-oriented domain model to a traditional relational database, allowing Java programmers to manipulate the database freely using object-oriented thinking. It not only handles the mapping from Java classes to database tables (and from Java data types to SQL data types), but also provides data query and retrieval facilities, and can greatly reduce the development time otherwise spent on manual data handling in SQL and JDBC. Most notably, Hibernate can replace CMP in a J2EE architecture that uses EJB, taking over the heavy task of data persistence.

The goal of Hibernate is to relieve developers of the programming tasks associated with persisting the bulk of common data. Hibernate adapts to the development process, whether starting from a new design or from an existing database. It can generate SQL automatically, freeing developers from the tedious work of processing result sets and converting objects by hand, and enabling applications to be ported to any SQL database. It also provides transparent persistence; the only requirement it places on a persistent class is a parameterless constructor.
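The parameterless-constructor requirement exists because an ORM instantiates persistent classes reflectively and then fills in their state from a result-set row. A minimal sketch of that mechanism, with a hypothetical Customer class and no real Hibernate involved:

```java
import java.lang.reflect.Field;

// A plain persistent class. The no-argument constructor is the only
// requirement the text mentions; the class contains no persistence code.
class Customer {
    private int id;
    private String name;
    Customer() {}                      // required for reflective instantiation
    int getId() { return id; }
    String getName() { return name; }
}

// Simulates what an ORM does when it "hydrates" one database row into an object:
// create the instance via the no-arg constructor, then set fields reflectively.
class TinyHydrator {
    static Customer hydrate(int id, String name) {
        try {
            Customer c = Customer.class.getDeclaredConstructor().newInstance();
            Field f = Customer.class.getDeclaredField("id");
            f.setAccessible(true);
            f.setInt(c, id);
            f = Customer.class.getDeclaredField("name");
            f.setAccessible(true);
            f.set(c, name);
            return c;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}

class HydratorDemo {
    public static void main(String[] args) {
        Customer c = TinyHydrator.hydrate(1, "Ann");
        System.out.println(c.getId() + ":" + c.getName()); // 1:Ann
    }
}
```

Without the no-arg constructor, `getDeclaredConstructor().newInstance()` would fail, which is why Hibernate imposes it as its only demand on persistent classes.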

Advantages: mainly used in the EJB layer; highly configurable, flexible, and it simplifies database operations.

Disadvantage: Difficult to configure.

Common software architecture

  • Three-tier architecture

In software architecture design, the layered structure is the most common and most important one. It is generally divided into three layers, from bottom to top: the data access layer, the business logic layer (or domain layer), and the presentation layer, as shown in the figure:

Figure 3-11 Three-tier architecture

Data access layer: sometimes called the persistence layer, it is responsible for database access; in short, it implements the Select, Insert, Update, and Delete operations on data tables. If elements of ORM are added, it also includes the mapping between objects and data tables and the persistence of object entities.

Business logic layer (BusinessRules): the core of the whole system, concerned with the system’s business (domain). Taking the STS system as an example, the design of the business logic layer concerns the logic of sales tracking. Structurally, it encapsulates the operations of the data access layer. This layer consists mainly of classes that implement specific business logic.

Presentation layer (WebUI): the UI part of the system, responsible for interaction between users and the system. Ideally, this layer should contain no business logic; its code should relate only to interface elements. In the current project it is built with ASP.NET, so it contains many Web controls and related logic.
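The three layers can be sketched as follows. The classes and the follow-up rule are invented for illustration, not taken from the STS system; the point is that each layer talks only to the layer directly below it.

```java
// Data access layer: knows how to fetch data, nothing about business rules.
// (A real DAO would query a database; a deterministic rule stands in here.)
class OrderDao {
    String findStatus(int orderId) {
        return orderId % 2 == 0 ? "shipped" : "pending";
    }
}

// Business logic layer: encapsulates the business rule, delegates storage to the DAO.
class OrderService {
    private final OrderDao dao = new OrderDao();
    boolean needsFollowUp(int orderId) {
        return "pending".equals(dao.findStatus(orderId));
    }
}

// Presentation layer: formats output for the user, contains no business rules.
class OrderPage {
    private final OrderService service = new OrderService();
    String render(int orderId) {
        return service.needsFollowUp(orderId)
                ? "Order " + orderId + ": follow up"
                : "Order " + orderId + ": ok";
    }
}

class ThreeTierDemo {
    public static void main(String[] args) {
        OrderPage page = new OrderPage();
        System.out.println(page.render(2)); // Order 2: ok
        System.out.println(page.render(3)); // Order 3: follow up
    }
}
```

Note that `OrderPage` never touches `OrderDao` directly: replacing the data source would leave the presentation layer untouched, which is the loose coupling argued for later in this section.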

  • Five-tier architecture

A SaaS software architecture can also be divided into five tiers, from top to bottom: the user interface layer (presentation layer), the business logic layer, the general layer, the application framework layer, the remote access (Web Service) layer, and the data access layer, as shown in the figure:

Figure 3-12 Microsoft-based .NET architecture design

User Interface Layer (UI)

The user interface layer is the interface the user operates directly. It consists of the interface appearance, form controls, frames, and other parts, and is responsible for interaction between users and the system. Ideally, this layer should contain no business logic; its code should relate only to interface elements. In the current project it is built with ASP.NET, so it contains many Web controls and related logic.

    • The interface appearance includes skins, images, and CSS style sheets.
    • Form controls mainly include common forms and user-defined controls.
    • The framework mainly includes Master Page and Frame Page.
    • Others mainly include JavaScript files, DLL files, reports, database creation schemas, and model development templates.

Business logic layer ( BusinessRules )

It is the core of the whole system, concerned with the system’s business (domain). Taking the STS system as an example, the design of the business logic layer concerns the logic of sales tracking. Structurally, it encapsulates the operations of the data access layer. This layer consists mainly of classes that implement specific business logic.

    • BLFactory business logic factory
    • IBL business logic interface
    • BusinessRules business logic implementation

General layer

The general layer runs through the presentation and business logic layers of the entire project. It mainly holds the more general constant definitions and general services (Service) in the project. A Service here refers to a general method in the current project’s business logic, written in a corresponding static class and provided as a service.

CommonLayer : Stores common constants and methods .

Data access layer

This layer has the most complex structure and mainly consists of the following sub-layers: the data access factory layer (DALFactory), the data access interface layer (IDAL), the custom query layer (PersistenceFacade), the temporary layer (DataAccessLayer), and the data persistence layer (PersistenceLayer).

The following is from bottom to top:

    • The PersistenceLayer is the bottom layer of the framework design (apart from the application framework layer). It is mainly responsible for objectifying the physical database with ORM ideas: database tables are mapped to entity classes, and the corresponding fields to class attributes. In this way the physical database becomes completely transparent to developers; by applying ORM we free ourselves from, and become independent of, the specific database implementation.
    • Concretely, we use ActiveRecord, the lightweight data access component of the well-known open source Castle project.
    • The PersistenceFacade layer and IDAL define all the query methods used in the project, corresponding to the data entities defined in the PersistenceLayer. The query classes defined here may use any combination of the three query mechanisms ActiveRecord provides (the simple interface of ActiveRecordBase, the simple query SimpleQuery, and the custom query CustomerQuery), and each class must implement the relevant interface defined by the IDAL interface layer.
    • The DALFactory layer, as the factory for data access, invokes the relevant operations in the data access components made up of IDAL and PersistenceFacade through the reflection mechanism of .NET.
    • The DataAccessLayer is a temporary layer. To be clear, this layer is not strictly necessary, because no SQL statements need to be written in the project: all SQL is replaced with HQL. The purpose of this layer is to ease the technical transition of project team members; it allows the database to be operated through SQL (not recommended) and will no longer be provided once the architecture is stable.

Application framework layer ( Framework )

The purpose of this layer is technical accumulation: things common across projects are moved into the application framework layer to achieve code reuse. This layer can later be treated as a black box, and common components can be included in it.

    • Framework: accumulates methods and controls that can be abstracted.
    • MSMQMessage: implementation of the message processing queue.
    • Pager: general paging class.
    • Report: general report class.
    • Controls: control handling class.
    • DataFormat: data format conversion class.
    • WebUI: page processing class.
    • Validate: data validation.
    • Object: conversion and access between objects.

The benefits of a layered architecture

1. Developers can focus on a single layer of the overall structure;

2. It is easy to replace a layer’s original implementation with a new one;

3. It can reduce the dependency between layers;

4. Conducive to standardization;

5. It facilitates the reuse of logic at each layer.

In a nutshell, layered design achieves the following goals: separation of concerns, loose coupling, logic reuse, and standard definitions.

A good layered structure makes the division of labor among developers clearer. Once the interfaces between layers are defined, developers responsible for different parts of the design can divide their attention and work in parallel. For example, UI developers need consider only the user interface experience and operation, domain designers can focus purely on the design of the business logic, and database designers need not worry about tedious user interaction. With each developer’s task confirmed, development progress improves rapidly.

The benefits of loose coupling are obvious. If a system is not layered, its various concerns become tightly intertwined and interdependent, and none can be replaced; a single change ripples through the whole, with severe impact on the project. Reducing the dependencies between layers not only ensures future scalability but also clearly benefits reusability: once a unified interface is defined for each functional module, it can be called by other modules without redeveloping the same function.

Standards are also essential to a good hierarchical design. Only with a certain degree of standardization can the system be scalable and replaceable, and communication between the layers must likewise go through standardized interfaces.

Just as no gold is one-hundred-percent pure and no person is perfect, the layered structure inevitably has some defects:

1. It reduces system performance. This is unavoidable: without a layered structure, much business logic could access the database directly to fetch its data, but now every request must pass through the middle layers.

2. It sometimes leads to cascading modifications, especially in the top-down direction. If a function is added to the presentation layer, then to keep the design consistent with the layering, corresponding code may have to be added to the business logic layer and the data access layer as well.
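To make the layering concrete, here is a minimal sketch (in Java; the same structure applies in C# or any object-oriented language): the presentation layer depends only on an interface to the business layer, so the implementation behind that interface can be replaced without touching its callers. The names IOrderService, OrderService, and OrderDao are illustrative, not from any real project.

```java
// Presentation layer -> business logic layer -> data access layer,
// coupled only through interfaces, so each layer can be replaced independently.
interface IOrderService {
    double totalWithTax(double amount);   // the contract the UI layer codes against
}

class OrderDao {                          // data access layer (stubbed here)
    double taxRate() { return 0.25; }     // would normally be read from the database
}

class OrderService implements IOrderService {  // business logic layer
    private final OrderDao dao;
    OrderService(OrderDao dao) { this.dao = dao; }
    public double totalWithTax(double amount) {
        return amount * (1 + dao.taxRate());
    }
}

public class LayeredDemo {
    public static void main(String[] args) {
        // The "UI" sees only the interface, never the concrete implementation.
        IOrderService service = new OrderService(new OrderDao());
        System.out.println(service.totalWithTax(100.0));  // prints 125.0
    }
}
```

Swapping OrderService for a new implementation, or OrderDao for a real database gateway, requires no change in the presentation layer, which is exactly the replaceability benefit described above.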

Software Architecture View

Philippe Kruchten writes in his book “The Rational Unified Process: An Introduction”:

An architectural view is a simplified description of a system as seen from a certain perspective or point, covering a particular aspect of the system and omitting entities that are not related to this aspect.

In other words, an architecture must cover more content and decisions than the human brain can grasp in one stroke, so a “divide and conquer” strategy is adopted and the design is approached from different perspectives. This separation also makes the architecture easier to document.

Figure 3-13 The 4+1 view method proposed by Philippe Kruchten

The different architectural views of this approach carry different architectural design decisions and support different goals and uses:

  • Logical view: when an object-oriented design approach is adopted, the logical view is the object model.
  • Development view: describes the static organization of the software in the development environment.
  • Process view: describes the design of the concurrency and synchronization aspects of the system.
  • Physical view: describes how the software maps to hardware, reflecting the distributed design of the system.

Figure 3-14 Architectural design for different requirements using the 4+1 view method

Logical view. The logical view focuses on functions, including not only user-visible functions but also the “auxiliary function modules” that must be provided to implement them; these may be logical layers, function modules, and so on.

Development view. The development view focuses on packages, including not only the source programs to be written but also third-party SDKs, ready-made frameworks and class libraries that can be used directly, and the system software or middleware on which the developed system will run. There may be a mapping between the development view and the logical view: a logical layer, for example, generally maps to multiple packages.

Process view. The process view focuses on runtime concepts such as processes, threads, and objects, and on the related issues of concurrency, synchronization, and communication. Its relationship with the development view: the development view generally concerns the static, compile-time dependencies of packages, whereas after these programs run they manifest as objects, threads, and processes; the process view is concerned with how these runtime units interact.

Physical view. The physical view focuses on how “the target program and its dependent runtime libraries and system software” are ultimately installed or deployed to physical machines, and on how machines and networks are deployed to meet the reliability and scalability requirements of the software system. Its relationship with the process view: the process view pays particular attention to the dynamic execution of the target program, while the physical view attends to its static location; the physical view is the architectural view that considers the interaction between the software system and the entire IT system.

Product Development Model

The product development model is a focus of corporate strategy. The product development route determines a whole series of management methods and team-building issues, and it embodies the enterprise’s organizational strategy and management thinking. The product development model runs through the entire product life cycle. From the traditional software engineering stages of market research, project initiation, requirements analysis, design, detailed design, development, testing, release, and maintenance to the currently popular IPD and market-oriented business models, all of these are changing the traditional R&D model. The new thinking, with service experience at its core, is the essence of the SaaS model: we do not develop for the sake of product R&D; we develop for market value.

Several Mainstream Product Development Models

  • Functional development with project management

This is the product development model usually adopted by enterprises. The general manager or the marketing department settles on a new product idea and decides whether to initiate a project. The R&D/technical department is responsible for design, development, and testing, forming product prototypes or service plans, which are then transferred to the production department for batch manufacturing; the sales department is responsible for sales, and the customer service department provides after-sales service. Each functional department is responsible only for a certain stage of new product development and formulates the business operation process of its own department. Although there are project leads or formally appointed project managers and product managers, none of them is responsible for the final market success of the product.

Under such a management system the focus falls on the vertical management of individual departments, while the horizontal relationships among profit model, product concept, research, production, supply, and sales go unmanaged. The product development process as a whole receives little attention, and few people comprehensively examine a product’s market value, product strategy, development method, and marketing mix, so new product development decisions are often made without seeing the whole picture. The heads of functional departments care only about smoothly handing the product to the next link in the chain while complaining about the quality of work in the previous one, and top management has to do a great deal of coordination, communication, and decision-making. When an enterprise grows to a certain scale, especially when multiple products are being developed at the same time, the general manager tends to attend to one thing at the expense of another, busy “firefighting” and making decisions about product design details and internal management.

Figure 3-15 Functional development with project management

  • PACE: Product and Cycle Optimization Approach

PACE (Product And Cycle-time Excellence) was proposed by the American management consulting company PRTM in 1986. Used by PRTM to guide enterprises in improving their product development processes, it provides a complete general framework, set of elements, and standard terminology.

1. The basic idea of PACE

(1) Product development is driven by the decision-making process; it is a process that can be managed and improved, not one that relies merely on genius and luck.

(2) The product development process needs to be defined and implemented to ensure that all relevant personnel of the enterprise have a common understanding and know how to coordinate and cooperate.

(3) Product development is a structured process with four levels and a three-level schedule, and it needs to be incorporated into a logical process framework. PACE holds that problems must be solved through comprehensive methods; isolated, scattered improvements are not advisable.

(4) Each stage in the evolution of the four processes must be taken step by step. Prematurely introducing an element of the next stage into the current stage is pointless, like adding a turbocharger to a bicycle: it contributes little to speed but adds weight.

(5) Product development needs to be managed in a public decision-making process, and the management focus of top management is the key to decision-making and balancing the development process.

(6) The product development project team and senior management need a new organizational model (the core team approach): the product development team should have an empowered product manager and several cross-functional members, and senior management should become a product approval/management committee.

(7) Design methods and automated development tools are effective only with supporting infrastructure; the improvement of the product development process cannot rely on design methods and automated tools touted as “silver bullets”.

2. Representative works of PACE

In the book “PACE: Product And Cycle-time Excellence”, Michael E. McGrath, the founder of PRTM, comprehensively and systematically introduces the theory and knowledge system of PACE.

Michael E. McGrath, one of the founders of PACE, also believes that product development is the main battlefield of business in the 21st century and that the future will be the “era of R&D productivity”, in which new products can be developed in volume and companies will pay more attention to new product development resource management, project management, technology management, and product strategy.

3. The main core content of PACE

PACE holds that product development should focus on seven core elements: stage review and decision-making, a cross-functional core team, a structured development process, the use of a variety of development tools and techniques, product strategy, technology management, and pipeline management that balances the input of multiple products against resources.

  • IPD: Integrated Product Development

IPD (Integrated Product Development) takes its ideas from PACE. On that basis, Motorola, DuPont, Boeing, and other companies continually improved and refined it in practice; IBM developed it further through learning and practice and successfully helped Huawei implement the system. The IPD process can be summarized as “one structured process, two types of cross-departmental teams, three system frameworks, four major decision review points, five core concepts, six important stages, seven related elements, and eight positioning tools”; its core ideas are process reengineering and product reengineering.

Figure 3-16 IPD development mode

  • SGS: Gate Management System

SGS (Stage-Gate® System), the stage-gate management system, was founded by Robert G. Cooper in the 1980s and is used by companies in the United States, Europe, and Japan to guide new product development. (Cooper has long been committed to research, especially empirical research, on product innovation and development management. He believes that through extensive surveys and statistical analysis the laws of product innovation and development can be discovered, and many of his empirical research reports have become an important basis in academic and business circles for analyzing the success or failure of new products.)

1. The basic idea of SGS:

(1) Make the project right – listen to the opinions of consumers, do the necessary preparatory work, and use a cross-functional work team

(2) Do the right project – carry out strict project screening and portfolio management

2. Representative works of SGS:

In his book “Winning at New Products: Accelerating the Process from Idea to Launch”, Professor Cooper provides a detailed introduction to various aspects of gate management systems and provides extensive research findings.

3. The main core content of SGS

The core of SGS is its new product development process, that is, the stage-gate management process; its model is shown below:

Figure 3-17 SGS development mode

SGS pays great attention to effective gate decision-making and portfolio management, making go/kill decisions at each stage of product development to prevent worthless products from wasting further resources. In addition, multiple products need to be prioritized so as to give full play to the combined advantages of enterprise resources.

SGS also emphasizes marketing work before a product is put on the market. The value of the product is ultimately realized through marketing, so how to market it should be considered from the initial stage of development: before development is complete, the market analysis should be finished, product goals formulated, core strategies positioned, and the marketing program refined.

SGS recommends that enterprises formulate product innovation strategies. For an enterprise, sustainable competitiveness lies in the continuous introduction of successful new products, and a visionary product innovation strategy and product planning help the development and decision-making of each new product.

  • PVM: Product Value Management Model

The idea of product value management (PVM) is based on the profit model, on D. Lehmann and Crawford’s “Product Management”, and on the SGS stage-gate management system, and it has been adopted by many small and medium-sized enterprises as well as world-renowned brand enterprises. PVM introduces the profit model and its design method in detail. Centered on customers, needs, and markets, and guided by competition and profit, it runs from corporate vision and strategy implementation through product planning, takes product management and the product life cycle as its axis, and discusses the whole process of a new product from conception to commercialization. It emphasizes value-chain and value-stream analysis based on the business model; a rational strategy and strict evaluation procedures are the reliable guarantees of product innovation and development.

1. The basic idea of PVM:

(1) Do the right thing – strategy determines direction and the model determines performance; emphasis on product planning and product management

(2) Do things right – the process determines the method; focus on product demand analysis, product planning, technology development, and marketing-mix management

(3) Do the right thing correctly – capability determines success or failure; project management is the guarantee of success

2. The main core content of PVM:

(1) PVM attaches great importance to profit-model and value-chain analysis, holding that “success is based on an excellent organization, and excellence comes from an extraordinary profit model”. It emphasizes product planning and product management, raising the research focus from the level of specific product development to the level of product value and strategy.

(2) PVM also holds that effective gate management and decision reviews are needed in the product development process, and that the product development process and the market management process should be organically integrated to reduce the waste of limited enterprise resources on worthless products.

(3) PVM highlights the coordination of product demand analysis, product concept and marketing mix in order to realize customer value and give full play to the combined advantages of enterprise resources.

(4) PVM emphasizes the core role of project management in product development, and advocates the implementation of product manager system for product management.

(5) PVM focuses on technology development platform construction, core technology development, and cost value engineering, and holds that a systematic way of thinking, rather than KPI+BSC, is the correct way to improve R&D performance.

(6) PVM also holds that management is the core competitiveness of an enterprise, advocates R&D strategic alliances, and expects competition between enterprises to turn into competition in product management.

Product Development and Technology Development

  • The difference between product development and technology development

The most important thing in product development is to focus on customer needs and to realize those needs quickly and at low cost with technology or skills, which need not all be created in-house. Product development is market-driven, and product development cannot be allowed to fail.

Technology development, by contrast, is a personalized process of creation. In the early stage of product development we often conduct market research first, then make technical forecasts, and then draw up product plans for development; this is a typical technology-driven process. Technology development focuses on technologies and principles; it is a creative process whose risks and cycles are unpredictable, so technology development is allowed to fail.

Product development and technology development are mutual input and output.

For a start-up enterprise, capital is insufficient and survival comes first; to survive, it can flexibly choose between taking on product development or technology development. An enterprise that intends to develop over the long term, however, should do its best to invest in technology development, focusing on product forecasting and new technology development and striving to build products that lead the industry. A product that leads the market brings greater profits and lets the enterprise achieve sustainable development.

  • Three Eras of Product Development
  1. Product era

The Product Era of Technology-Based Product Launch: Product-Centric

Traditional output process: start with resources and technology

Market environment: products are in short supply; it is the era of the seller’s market, where business is based on selling products!

Applicable companies: those with mature products and universal services! In an era of fierce competition, technology must be irreplaceable and leading so that it can form a barrier.

Risk: After entering the era of competition, R&D will become a big cost pressure.

  2. The era of personalized service

The era of fully personalized service, in which products are customized to the individual needs of customers: customer-centric.

Current and future output process: proceed entirely from the customer and the market, outsourcing or leasing the best resources and technology.

Market environment: a customized buyer’s market!

Applicable companies: those with good channels and real system-integration capabilities! The marketing department is the company’s largest department, and the relationship among technology, marketing, and sales is olive-shaped.

Risks: without technology and product-management capabilities, a company that develops products by itself may lose all of its profits.

  3. Marketing era

The marketing era, which combines customer needs with existing technology platforms to launch a business: profit-centric.

Present and future output process: start from customers and marketing, develop products on off-the-shelf technology, and separate product development from technology development.

Market environment: technical shelves and product platforms have been initially established, and customer needs are diversified!

Applicable companies: those with semi-mature products featuring functional improvements or partial innovation, or with platform-based products. The marketing department is very important, and the relationship among the company’s R&D, sales, and marketing is a dumbbell-shaped structure.

Risk: if no one is dedicated to building the product platform and demand cannot be controlled, a great deal of repetitive development is required and the development cycle becomes uncontrollable, causing losses for the company.

Product Versions

Product: Refers to the version delivered to the user. There are usually three definitions:

V version: the platform version.

R version: the final product delivered to the user.

M version: a version customized for specific customers on the basis of the R version.

The difference between product (R) and product platform (V):

Table 3-1 Product version

|                    | Product (R)        | Product Platform (V) |
| ------------------ | ------------------ | -------------------- |
| Market range       | Market segments    | General market       |
| Development object | Product package    | Technology package   |
| Plan               | Business plan      | R&D plan             |
| Release interval   | Short (months)     | Long (years)         |
| Target audience    | External customers | Inside the company   |

Build a SaaS product platform

Develop R products on the V platform

On the basis of the R product, M versions can be customized through the configurability and extensibility of SaaS.

V, R, M constitute the product development structure tree as shown in the figure:

Figure 3-21 VRM product tree structure

Research and practice show that the process is subordinate to the model and the model determines the process; the model is subordinate to the strategy and the strategy determines the model. Together they also form a typical collaborative supply-chain relationship.

The business model exists in the whole process of production, operation and management of the enterprise, is related to the business performance of the enterprise, and supports the realization of the strategic goal of the enterprise. At present, in-depth research on the goals and methods of BMR (business model reorganization), the basic relationship between BMR and BPR (business process reorganization), and the real implementation are very necessary to promote enterprise management innovation and enhance enterprise competitiveness.

Below, we take product R&D as an example to analyze and discuss the goals and methods of product R&D model reorganization.

Status of product research and development: authoritative data show that 80% of the world’s R&D and 71% of technological innovation are created and owned by the world’s top 500 companies. The core technologies of many industries in China still rely on foreign technology. In 2004, the average R&D investment of China’s top 500 manufacturing enterprises was 190 million yuan, only 1.88% of their sales revenue.

At this stage there are many problems in the product R&D systems of Chinese enterprises that seriously restrict the improvement of R&D capabilities and the rapid introduction of new products. In most enterprises these problems manifest as weak awareness of R&D innovation, insufficient R&D capability, inappropriate R&D strategies, unsound R&D institutions, few expert R&D talents, low R&D investment, long R&D cycles, high R&D costs, unreasonable R&D processes, and customer demands that cannot be fully satisfied. At the core, an independent-innovation R&D system has not yet been established.

The goal of product R&D model reorganization: based on the analysis of the current situation, the goal is to quickly establish an independent-innovation R&D system, improve R&D capabilities, shorten the R&D cycle, and reduce R&D costs, so as to develop new products that customers really need and that carry independent intellectual property rights and core technologies.

Product R&D model reorganization method: based on the enterprise development strategy and the reorganization goals, the first step is to formulate the enterprise’s product R&D strategy. The core is to develop more products with independent intellectual property rights and core technologies by rapidly establishing an independent-innovation R&D system and improving R&D capabilities.

The second is to formulate an R&D strategy: whether to adopt independent R&D, domestic cooperative R&D, or domestic entrusted R&D; to introduce foreign core technologies through joint ventures or purchase them directly with foreign exchange; or to implement a “going out” strategy and acquire foreign companies outright to obtain core technologies and outstanding R&D personnel. The purpose is to track the world’s scientific and technological frontiers effectively, acquire foreign core technologies, and rapidly improve R&D capability.

The third is to establish post-doctoral R&D workstations in China’s top 500 manufacturing companies and to create multi-level, multi-form R&D institutions.

The fourth is to establish a scientific employment mechanism to directly hire experts from home and abroad, especially professional leaders. At the same time, we must pay close attention to cultivating the R&D personnel of the enterprise, and form a reasonable echelon structure of R&D personnel as soon as possible.

The fifth is for China’s top 500 manufacturing enterprises to increase investment in R&D, striving to invest an average of 3% of main-business sales revenue each year in building the company’s independent-innovation R&D system.

The sixth is to establish a product collaborative research and development system based on information network. Integrate R&D technology, management technology and information technology to drive innovation in product R&D models and product design concepts.

The seventh is to speed up the design and implementation of all relevant processes and their supporting systems during the establishment of the independent innovation R&D system, so as to improve the role and efficiency of the process.

Build and accumulate your own development system

Conforming to industry conventions while keeping our own characteristics is our goal. Successful software companies have rich, reusable code assets. A few lines of code may be insignificant in a single system, but once they can be reused across a large number of systems they become valuable. A single project is not necessarily profitable, but turning previous project experience and code into a new project costs far less. The software industry must therefore build its own knowledge base and keep accumulating it; this is an inexhaustible asset.

Build a reusable knowledge base

  • Take advantage of development templates 

Assembling our general pages from templates we have developed ourselves greatly reduces page-design and development code and improves development efficiency.

The template includes page style control, common page-turning components, and common operation functions such as opening a page, deleting, adding, and exiting.

  • Control management 

From now on we will standardize on the controls provided by Microsoft’s AjaxControlToolkit. This set of controls basically covers all the controls we use; its main features are refresh-free (Ajax) operation and good integration.

  • Common component management 

Component management distinguishes components available to any project from components available only to the current project. These components are in fact assemblies made up of various classes; a compiled component is referenced as a DLL file.

Components available in any project are placed in the CommonLayer layer.

Common components are managed in a unified way: each method of a common component is written in a standard format and must document an example call, its parameters, and its return results.
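As an illustration of that standard format, here is a hypothetical common-component method (sketched in Java; the class DataFormat echoes the name in the Framework list above, but fenToYuan and its details are invented for illustration) whose comment block documents the parameters, return value, and a call example:

```java
// Sketch of a reusable common-component method documented in a standard format.
public class DataFormat {
    /**
     * Converts an amount in fen (the smallest currency unit) to a yuan string.
     * Parameter: fen - non-negative amount in fen, e.g. 1234.
     * Returns:   a string with two decimal places, e.g. "12.34".
     * Example:   DataFormat.fenToYuan(1234) returns "12.34".
     */
    public static String fenToYuan(long fen) {
        return String.format("%d.%02d", fen / 100, fen % 100);
    }

    public static void main(String[] args) {
        System.out.println(fenToYuan(1234));  // prints 12.34
    }
}
```

Because every method carries its call example, parameters, and return description, other projects can reuse the compiled DLL (or JAR) without reading the source.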

Style Design

  • The role of Themes

Themes are another Web site customization feature introduced in ASP.NET 2.0. Their function is to set certain properties of pages and controls, and these settings can be applied to the entire application, a single page, or a single control.

In the desktop sense, a theme is a set of visual interface settings including wallpaper, cursors, fonts, sounds, and icons. In ASP.NET, a theme is a collection of property settings that define the appearance of pages and controls; that appearance can then be applied consistently across the pages of a web application, across an entire web application, or across all web applications on a server.

A theme consists of a set of elements: skins, cascading style sheets (CSS), images, and other resources. A theme contains at least the skins. Themes are defined in special directories on a website or web server.
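Once such a theme directory exists, the theme can be applied per page through the `Theme` attribute of the `@ Page` directive, or for the whole application in web.config. A minimal sketch, assuming a theme named BlueTheme (the name used in the folder example in this section) has been created under App_Themes:

```xml
<!-- web.config: apply the BlueTheme theme to every page in the application -->
<configuration>
  <system.web>
    <pages theme="BlueTheme" />
  </system.web>
</configuration>
```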

  • Definition of Skin

Skin files have the file extension .skin and contain property settings for individual controls (for example Button, Label, TextBox, or Calendar controls). Control skin settings look like the control tags themselves but contain only the properties you want to set as part of the theme. For example, the following is a control skin for the Button control:
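The snippet referred to here is missing from the text; a representative Button skin looks like the following (the property values are only illustrative). Note that a skin definition carries no ID attribute:

```aspx
<asp:Button runat="server" BackColor="lightblue" ForeColor="black" />
```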

Create a .skin file in the theme folder. A .skin file can contain one or more control skins for one or more control types. Skins can be defined in a separate file for each control, or all the skins for a theme can be defined in one file.

There are two types of control skins – “default skin” and “named skin”:

When a theme is applied to a page, the default skin is automatically applied to all controls of the same type. A control skin with no SkinID property is a default skin. For example, if you create a default skin for the Calendar control, that skin applies to all Calendar controls on pages that use the theme. (Default skins are matched strictly by control type, so a Button skin applies to all Button controls but not to LinkButton controls or to controls derived from the Button object.)

A named skin is a control skin with the SkinID property set. Named skins are not automatically applied to controls by type; instead, you apply a named skin to a control explicitly by setting the control’s SkinID property. Named skins let you set different appearances for different instances of the same control in an application.
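A single .skin file can therefore hold both kinds. For example (property values are illustrative; the SkinID matches the SmallCalendar used in the application example later in this section):

```aspx
<%-- Default skin: applies to every Calendar control on themed pages --%>
<asp:Calendar runat="server" BackColor="White" BorderColor="Gray" />

<%-- Named skin: applied only to controls that set SkinID="SmallCalendar" --%>
<asp:Calendar runat="server" SkinID="SmallCalendar"
    Width="120px" ShowTitle="false" />
```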

  • Cascading Style Sheets

Themes can also contain cascading style sheets (.css files). When a .css file is placed in the theme directory, the style sheet is automatically applied as part of the theme. A style sheet is defined in the theme folder with the file extension .css.

  1. Create a new folder called App_Themes on the website. (Note: the folder must be named App_Themes.)
  2. Create a new subfolder of the App_Themes folder to hold the theme files. The name of this subfolder is the theme name; for example, to create a theme named BlueTheme, create a folder named \App_Themes\BlueTheme.
  3. Add the files that make up the theme’s skins, style sheets, and images to the new folder.
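The resulting layout for the BlueTheme example would look like this (the individual file names are illustrative):

```text
App_Themes\
    BlueTheme\
        BlueTheme.css     <-- applied automatically as part of the theme
        Button.skin       <-- control skins
        Calendar.skin
        Images\           <-- images referenced by the theme
```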
  • Create a skin
  1. Create a new text file in the theme subfolder with the .skin extension.
  2. A typical convention is to create a .skin file for each control, such as Button.skin or Calendar.skin. However, you can create as many or as few .skin files as you want; a skin file can contain multiple skin definitions.
  3. In the .skin file, add the normal control definition (using declarative syntax), but include only the properties you want to set for the theme, and do not include the ID attribute. The control definition must contain the runat="server" attribute.
  4. Repeat steps 2 and 3 for each control skin you want to create.
  • Apply skins to controls

Skins defined in a theme apply to all control instances in the application or page to which the theme is applied. In some cases you may want to apply a specific set of properties to a single control. You can do this by creating a named skin (an entry in the .skin file with the SkinID property set) and then applying it to individual controls by ID. For more information on creating named skins, see “How to: Define ASP.NET Themes”.

Apply a named skin to a control

Set the SkinID property of the control, as shown in the following example:

<asp:Calendar runat="server" ID="DatePicker" SkinID="SmallCalendar" />

If the page theme does not include a control skin that matches the SkinID property, the control uses the default skin for that control type.

Configuration management

Using scientific configuration management ideas, supplemented by advanced configuration management tools, can easily solve the problems caused by management in the process of project development.

  1. List the software configuration items required for each stage of software development, operation, and maintenance

Software configuration items are the many information items produced in the course of software development, such as work products, intermediate (stage) products, and the tools and software used. Table 3-2 lists several types of software configuration items and the stages in which they are generated.

Table 3-2 Software configuration items

Classification | Stage | Examples
Environment class | Software development environment or software maintenance environment | Compilers, operating systems, editors, database management systems, development tools, project management tools, documentation tools
Definition class | Work products from the requirements analysis and definition phase | Requirements specification, project development plan, design criteria or design guidelines, acceptance test plan
Design class | Work products from the design phase | System design specification, program specifications, database design, coding standards, user-interface standards, test standards, system test plan, user manual
Coding class | Work products from coding and unit testing | Source code, object code, unit-test data and unit-test results
Maintenance class | Work products generated during the maintenance phase | Any of the above software configuration items that need to be changed

Only by clarifying which software configuration items exist at each stage can software companies implement software configuration management with confidence.

  2. Classify and supplement existing software configuration items to further improve the software configuration

When software companies implement a given system, different users have different needs. Table 3-3 shows the working environments of two different users:

Table 3-3 Working Environment

User | Computer configuration | Operating system | Back-end database system
User A | PIV 1.4 GHz | Windows 2000 | SQL Server 2005
User B | PIV 3.5 GHz | Windows 2000 | Oracle 9.0

To meet the usage requirements of individual users, our software products must take these differences into account. When designing the product, we arrange the configuration items as shown in Table 3-4:

Table 3-4 List arrangement

User | Configuration items (modules)
User A | Module a, module b, module c, module e, module h
User B | Module a, module b, module c, module f, module g

To realize these two different software configurations, in actual development we can develop each configuration item separately and then combine the items into different products according to the needs of each user, as shown in Figure 3-22:

 

Figure 3-22 Combining configuration items into different products for different users
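The idea of assembling products from independently developed configuration items can be sketched as follows; the class name, method name, and module identifiers are illustrative only, mirroring Table 3-4:

```java
import java.util.List;
import java.util.Map;

public class ProductAssembler {
    // Per-user module lists, as in Table 3-4 (illustrative data)
    static final Map<String, List<String>> CONFIG = Map.of(
        "User A", List.of("a", "b", "c", "e", "h"),
        "User B", List.of("a", "b", "c", "f", "g"));

    // Combine the independently developed modules into one product build
    static String assemble(String user) {
        return "product[" + String.join("+", CONFIG.get(user)) + "]";
    }

    public static void main(String[] args) {
        System.out.println(assemble("User A")); // modules a, b, c, e, h
        System.out.println(assemble("User B")); // modules a, b, c, f, g
    }
}
```

Each module is built once, and a product for a new user is just a new entry in the configuration table rather than new development.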

  3. Effective control and management of changes to software projects

Software enterprises are bound to encounter changes during software development, operation, and maintenance. Two main factors cause software changes: on the one hand, users, who may, for example, request modifications to the scope of work or the requirements; on the other hand, the development side, for example when defects are found in the design. For both situations, software companies can respond in the following ways:

Identify who on each side is authorized to handle changes

It should be made clear in advance which users have the right to request requirement changes and which members of the project development team have the right to accept them, and the number of such people on both sides should be kept small. The advantage of this is that it constrains the demand side: every requirement it raises must first be carefully discussed. When the project team receives a user's change request, after discussion with the personnel authorized to approve changes, it can consider the overall situation and update the related documents, programs, and plans involved.

Strict review of changes

Not all changes need to be implemented, and not all changes need to be implemented immediately. The purpose of the review is to decide whether and when a change is needed. For example, an interface style issue can be left unmodified for now, or scheduled and optimized later. In addition, modifications to core modules should be checked strictly, otherwise they may cause global problems.

Assess the impact of changes

Changes come at a cost. You should evaluate the cost of each change and its impact on the project, let users understand the consequences of the change, and reach a judgment together with them.

Let the customer confirm whether the cost of the change is acceptable. While evaluating the cost and discussing it with the customer, you can ask the user to decide jointly: “We can modify it, but can you accept the consequences?”, listing the consequences of the modification one by one.

4. Effective management of software versions

To adapt to different operating environments, different platforms, and different users' requirements, the products developed by software companies evolve into different versions of the same software. Software enterprises can implement version control through the following two common methods.

Numeric version identification

Versions are expressed numerically: the first edition is denoted V1.0 and the second V2.0. V1.0 and V2.0 are generally regarded as base version numbers, while V1.1 and V1.2 denote the first and second revisions of base version V1.0; such revisions are minor. If there is a major change, or several revisions add up to globally important changes, the base version number should be increased, for example to V2.0. Numeric version identification is illustrated in Figure 3-23:

Figure 3-23 Numeric version identification
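The numbering scheme above can be captured in a small helper class; the class and method names here (VersionId, revise, upgrade) are illustrative, not part of any standard:

```java
public class VersionId {
    final int major;   // base version number, e.g. the 1 in V1.2
    final int minor;   // revision number, e.g. the 2 in V1.2

    VersionId(int major, int minor) { this.major = major; this.minor = minor; }

    // Minor revision: V1.1 -> V1.2
    VersionId revise() { return new VersionId(major, minor + 1); }

    // Major change, or accumulated important revisions: V1.2 -> V2.0
    VersionId upgrade() { return new VersionId(major + 1, 0); }

    @Override public String toString() { return "V" + major + "." + minor; }

    public static void main(String[] args) {
        VersionId v = new VersionId(1, 0);          // V1.0, the first edition
        System.out.println(v.revise().revise());    // V1.2 after two minor revisions
        System.out.println(v.upgrade());            // V2.0 after a major change
    }
}
```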

Symbolic version designation

This notation extracts the important information about a version. For example, V1/VMS/DB SERVER denotes version 1 of the database server running on the VMS operating system. A software enterprise might likewise use names such as “personnel management system, stand-alone edition” or “personnel management system, network edition”.

5. Implement effective configuration auditing

Software enterprises can carry out configuration auditing from the following two aspects:

” Configuration Management Activity Audit “

A “configuration management activity review” is used to ensure that all configuration management activities of project team members follow the approved software configuration management policies and procedures, such as the frequency of check-in/check-out and the rules for promoting work-product maturity.

” Baseline Review “

This review ensures the integrity and consistency of the baselined software work products and that they meet their functional requirements. Completeness of the baseline can be considered from the following aspects: does the baseline library include all planned configuration items? Is the content of each configuration item in the baseline library itself complete (for example, do the references mentioned in the documentation exist)? For code, check against the code listing that all source files exist in the baseline library, and compile all source files to verify that the final product can be produced. Consistency mainly examines the relationship between requirements and design, and between design and code; especially when changes occur, check whether all affected parts have been changed accordingly. Non-conformances found in the audit are recorded and tracked until resolved.
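The completeness check against the planned code listing can be sketched abstractly as follows; the class and method names are illustrative, and a real audit would read the listing and the baseline library from disk:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class BaselineAudit {
    // Return the planned configuration items that are missing from the baseline library
    static List<String> missingItems(List<String> plannedListing, Set<String> baselineLibrary) {
        return plannedListing.stream()
                .filter(item -> !baselineLibrary.contains(item))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> planned = List.of("Main.java", "Db.java", "UserManual.doc");
        Set<String> baseline = Set.of("Main.java", "UserManual.doc");
        // Each missing item is a non-conformance to record and track until resolved
        System.out.println(missingItems(planned, baseline)); // [Db.java]
    }
}
```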

In practice, auditing is generally considered an after-the-fact activity and is easily overlooked. However, “after the fact” is relative: problems found in audits early in the project provide guidance and reference for the project's later work. To improve the effectiveness of the audit, a checklist should be prepared in advance, as shown in Table 3-5.

Table 3-5 Checklist

Checklist item | Yes | No | Remarks
Whether check-in and check-out are performed in time | | |
Whether the configuration repository is backed up regularly | | |
Whether the configuration system is checked for viruses periodically | | |
Whether the non-conformances from the last review have been resolved | | |
Whether audit work is conducted regularly | | |
Whether a configuration review team has been set up | | |

6. Select the configuration tool

When software companies choose commercial configuration management tools, they can consider the following factors.

Tool market share

What most people choose is usually the better option. Moreover, a high market share usually indicates that the vendor's business is sound and that it is less likely to be acquired or to shut down.

Features of the tool itself

Evaluate the tool's stability, ease of use, security, scalability, and so on, and try it out carefully before investing. Scalability is the easiest of these to overlook: you may deploy the tool today in a team of a few or a dozen people, but in the future dozens or hundreds of people may rely on it to build on the company's platform. Will the tool still provide that capacity? If you have to switch tools later, you will regret today's choice.

abstract object model

The abstract object model provides a common business platform for enterprise-level application systems: it extracts the business common to government and enterprise applications to form a general business information system. Government and enterprise information systems are then constructed, integrated, and run on top of this layer, reducing repetitive development during enterprise application development.

The model is a set of reconfigurable abstract classes. These classes contain both complete methods that application developers inherit and use directly, and abstract method definitions to be implemented by the developers of application business objects. Application developers can use this object model to build object-oriented applications and frameworks.

The abstract object model provides the following features:

  • Custom business object properties
  • Variable business logic
  • Uniform object unique identifiers
  • Object-oriented design patterns
  • Query/filter by object properties
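A minimal sketch of such a reconfigurable base class, assuming hypothetical names (BusinessObject, Invoice, validate); it shows a uniform unique identifier, custom properties, and an abstract hook for variable business logic:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical base class of the abstract object model
abstract class BusinessObject {
    private final String id = UUID.randomUUID().toString();          // uniform unique identifier
    private final Map<String, Object> properties = new HashMap<>();  // custom properties

    String getId() { return id; }
    void setProperty(String name, Object value) { properties.put(name, value); }
    Object getProperty(String name) { return properties.get(name); }

    // Variable business logic: implemented by application business-object developers
    abstract boolean validate();
}

// An application-level object built on the common platform
class Invoice extends BusinessObject {
    @Override boolean validate() {
        Object amount = getProperty("amount");
        return amount instanceof Number && ((Number) amount).doubleValue() > 0;
    }
}

public class AbstractModelDemo {
    public static void main(String[] args) {
        Invoice inv = new Invoice();
        inv.setProperty("amount", 120.0);
        System.out.println(inv.validate());          // true: positive amount
        System.out.println(!inv.getId().isEmpty());  // true: every object carries an ID
    }
}
```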

model driven

MDA (Model Driven Architecture) is a software development framework defined by the OMG. It is built on UML and other industry standards and supports the visualization, storage, and exchange of software designs and models. Unlike plain UML usage, MDA creates machine-readable, highly abstract models that are independent of implementation technology and stored in a standardized way. MDA treats the modeling language as a programming language rather than merely a design language; the key point is that models play a central role in software development.

MDA derives from the well-known idea of separating the specification of a system's operation from the details of how the system uses the capabilities of its underlying platform. MDA provides a way (through related tools) to specify a system independently of any platform, to specify platforms themselves, to choose a particular implementation platform for the system, and to transform the system specification onto that platform. Its three main goals are portability, interoperability, and reusability through architectural separation.

Model-Driven Architecture (MDA) is a technology system that the OMG has promoted heavily in recent years, and a new hot spot for many researchers engaged in software modeling. Its core idea is to study the business model (for example, enterprise informatization or solutions in a given domain), extract a relatively core domain model, and abstract a PIM (Platform-Independent Model). Then, according to the development platform (such as .NET or J2EE) and the application platform (Windows or UNIX), a corresponding PSM (Platform-Specific Model) is formed. With appropriate tools, such as ArcStyler, the corresponding code and software system can then be generated. Of course, this is only the general idea and method.

  1. MDA theory is still in an exploratory period; many theories and methods are immature, and there are as yet no mature tools. Judging from current trends, both the theory and the practical tools remain far from the expectations set by the OMG, at least several years away from taking shape.
  2. At present, both foreign open-source organizations and some domestic organizations are only at the initial stage of MDA. What many people call “applying MDA” is really just an initial exploration within the MDA system. For example, ORM realizes MDA at a certain level in database applications, but it only solves the problem of entity-model mapping. A few days ago, a candidate I interviewed had used ArcStyler 4.x to build an application model of a bank POS system and generated a little skeleton code that still needed modification, and on that basis told me he had mastered MDA; that hardly amounts to mastery.
  3. The first hot spot of MDA may be the bridge, and in the field of MDA, mapping is a very important point, and transformation and interaction are just extensions of this point.
  4. For now, the language most likely to implement the MDA system is Java, although I dislike some of Java's clumsy aspects.
  5. The core of MDA is the PIM, because it is the most abstract and the most collaborative; at present it is also the bottleneck. The current UML 2.0 (the latest obtained from the OMG) is not sufficient as the language for building the entire MDA system, and some definitions in MOF still need improvement. For the whole system, MOF should serve chiefly as a standard, and only when that standard is mature can correct mapping rules be produced.
  6. When MDA finally flourishes, some programmers will lose their jobs, but not all of them. At the very least, MDA tools must be built by someone, because no single MDA tool can cover all fields, just as no single financial system works for all businesses: the standards differ from field to field.
  • The MDA process

The implementation of MDA mainly focuses on the following three steps:

  1. First, you model your application domain in UML at a high level of abstraction; this model has nothing to do with the technology that will implement it. We call it the Platform-Independent Model (PIM).
  2. The PIM is then transformed into one or more Platform-Specific Models (PSMs). This transformation is generally automated. A PSM describes your system in terms of a specific implementation technology and uses the frameworks that technology provides, such as EJB, a database model, or COM components.
  3. Finally, each PSM is translated into source code. Because a PSM already depends entirely on a specific technology, this step is generally relatively straightforward.

The hardest step in the MDA process is generating the PSM from the PIM. On the one hand it requires rich, solid knowledge of the target technology; on the other hand the source model (PIM) must contain enough information for the PSM to be generated automatically.

  • Generation through templates: MDA-light?!

In practical applications of MDA, an easier implementation works through templates (call it MDA-light). The platform-specific model step is effectively skipped, and source code is generated directly from the highly abstract PIM. With MDA-light you continue to do real programming: the detailed application logic is written in source code, not in UML.
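A toy sketch of template-based generation in the MDA-light spirit: a minimal PIM (just an entity name and an attribute list, both hypothetical) is turned directly into Java source text. Real tools such as ArcStyler or AndroMDA are far richer; this only illustrates the idea:

```java
import java.util.List;

public class MdaLightGenerator {
    // Generate a Java class skeleton directly from a minimal PIM description
    static String generate(String entity, List<String> attributes) {
        StringBuilder src = new StringBuilder("public class " + entity + " {\n");
        for (String attr : attributes) {
            src.append("    private String ").append(attr).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        // PIM: a "Customer" entity with two attributes (illustrative only)
        String code = generate("Customer", List.of("name", "address"));
        System.out.println(code);
    }
}
```

The detailed application logic would then be written by hand inside the generated skeleton, which is exactly the division of labor MDA-light describes.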

  • Prerequisites for using MDA

It is widely accepted in the industry that only change is permanent. Technology is always evolving; this is especially evident in the middleware space, and database technologies, operating systems, and even programming languages also change frequently, clearly faster than the basic concepts of an application domain.

If you work in a specific application area, projects in that area all share a certain similarity. If an entire application family or different projects belong to the same application domain, then MDA or the generation process will be especially suitable for you.

  • Advantages of MDA

Your investment in modeling will be more lasting and effective — far longer than the technology you currently use to achieve it. This will better protect your investment.

You have technical flexibility.

You will no longer be tied to the differing change cycles of a technology and an application; with the help of MDA, you can remain neutral toward change in both directions.

  • Disadvantages of MDA

MDA means more “assembly” than “development” — you basically have no technical wiggle room when building PIM for an application. This is still unimaginable for many developers today.

The creativity of software development diminishes to a certain extent. Developers often find it fascinating to argue about a new technology and to work at its cutting edge. Under the MDA process, however, much of the work is building models, far removed from specific technology, in line with the OMG's recommendations.

Potential immaturity. UML2.0 is still in its infancy. MDA tools have also been around for a relatively short time. There is also a lot of risk hidden here.

  • Problems to be solved in MDA process and generation development

Migration of data and applications: A problem that is often faced in the business world today is how to migrate large amounts of data and applications to new, MDA-based systems. A pure MDA process would treat the data model and database table structure as technical details. They shouldn’t have any impact on the Platform Independent Model (PIM) layer — so, is your MDA tool or generator responsible for generating database scripts as well?

Software maintenance: The preparation of different releases, patches or upgrades is an important part of maintaining a currently running program. How does MDA deal with these problems? Doing a fresh install every time?

Return on investment: with which environment and system should you start? With your second project after adopting MDA? Or the fifth?

Generators and related tools create a dependency on their producers — a dependency on producers that we’ve tried so hard to avoid in the past.

Enterprise Application Integration (EAI): A high level of abstraction, sounds good — but how do you get that abstraction for an application that’s already running?

As you can see, there are potentially many practical questions, all of which deserve serious answers. These questions are why we created openMDA: many of them have already been answered experimentally in real projects, and you (and we) will all benefit from that.

  • MDA’s Software Development Cycle

The software development process in MDA is driven by the modeling behavior of the software system. The following is the software development cycle of MDA:

The MDA life cycle is not very different from the traditional one. The main difference lies in the artifacts created by the development process: the PIM (Platform-Independent Model), the PSMs (Platform-Specific Models), and the code. The PIM is a model at a high level of abstraction, independent of any implementation technology. A PIM is converted into one or more PSMs, each tailored to a specific implementation technology; for example, an EJB PSM is a system model expressed in EJB constructs. The final step is to transform each PSM into code, which is closely tied to the implementation technology.

In traditional development, transformations from model to model, or from model to code, are done manually, whereas in MDA they are performed automatically by tools, both from PIM to PSM and from PSM to code. PIM, PSM, and code serve as design artifacts throughout the software development life cycle, where traditional development relies on documents and diagrams. Importantly, they represent different levels of abstraction and let us view the system from different perspectives. The ability to convert a high-level PIM into PSMs raises the level of abstraction, lets developers understand the system architecture clearly without being “polluted” by specific implementation technologies, and reduces the workload for complex systems.

The emergence of MDA points the way toward improving development efficiency and enhancing the portability, interoperability, maintainability, and documentability of software; the object-oriented community has predicted it to be the most important methodology of the next two years. The main problem with modeling today is that for many businesses it is just a paper exercise, which causes the model and the code to fall out of sync: the code keeps being modified while the model is not updated, so the model loses its meaning. The key to bridging the gap between modeling and development is to make modeling an integral part of development. MDA is a framework for model-driven development; its vision is a new way of describing and creating systems that makes UML useful beyond pretty pictures. Many experts predict that MDA may lead us into another golden age of software development.

  • MDA framework

MDA separates the model of a software system into a platform-independent model (PIM) and a platform-specific model (PSM), unified through transformation rules, in an attempt to escape the difficulties caused by changing requirements. The PIM is a high-level abstraction of the system containing no information related to implementation technology; the PSM is a model bound to a specific platform. In the MDA framework, a platform-independent modeling language is used to build the PIM; according to the mapping rules of the specific platform and implementation language, the PIM is transformed into a PSM; finally, the application code and test framework are generated.

The “building materials” of the MDA framework include: high-level models; one or more standard, well-defined languages in which to write them; definitions of how to transform a PIM into a PSM; a language, executable by transformation tools, in which these definitions are written; a tool capable of executing transformation definitions; and a tool capable of transforming a PSM into code.

The figure above shows the MDA framework; its main elements are models, the PIM, the PSM, languages, transformations, transformation definitions, and transformation tools. MDA is an open, vendor-neutral architecture that supports a wide range of application domains and technology platforms and can act as a lever between them. In the MDA development approach, the PIM models the requirements and the PSM is the model after specific technologies have been applied, making MDA a lever between requirements and technology: each can change independently of the other without tightly coupling business logic to implementation technology, while MDA bridges the gap between them through transformation, protecting our investment. The MDA approach lets a system be flexibly implemented, integrated, maintained, and tested; the system's portability, interoperability, and reusability can be preserved over the long term and can cope with future change.

  • Status of MDA

MDA is still evolving. Although it is arriving with great momentum, its problems are also visible. MDA's biggest benefit is the lasting value of the business model, but the cost is added layers of abstraction, and the transitions between layers are not as smooth as hoped: going from PIM to PSM and from PSM to code is far harder than generating machine code from a 3GL. On the modeling side, UML is exposing inherent defects and needs more mechanisms to support precise modeling and analyzable models; although OCL currently provides some support for precise modeling, it is still far from the ideal of an executable model. Looking back, the great success of UML laid a solid foundation for the emergence of MDA; yet on the long road from software technology to software engineering, MDA is only a small step forward. Even so, it has sent waves through the entire software industry and will profoundly influence future IT in areas such as model definition and development process.

The current situation in the MDA development-tool market is this: because standardization of the PIM-to-PSM transformation has not been completed, large vendors such as IBM and Borland remain mostly cautious; they provide some MDA functions in their development tools but do not fully follow the MDA specification defined by the OMG. Even so, besides adding MDA functions to Rational, IBM has proposed EMF (Eclipse Modeling Framework), an innovative MDA code-generation project within the open-source Eclipse project, showing its emphasis on the technology. Borland has announced that it too is focusing on MDA and is preparing to add automatic MDA-based model generation to Together. Compared with the caution and restraint of the big vendors, some small and medium-sized vendors are particularly active: tools that follow the OMG specifications, such as Interactive Objects' well-known ArcStyler, Compuware's OptimalJ, and the open-source AndroMDA, have been widely used in projects and have achieved notable results.

  • MDA related standards

In order to realize the grand vision of MDA, OMG has developed a series of standards:

UML: MDA uses UML to describe the various models. UML was not created for MDA, but as the most popular modeling language today, it occupies some 90% of the modeling-language field and has become the de facto standard, so the OMG's choice of it as the foundation of MDA was natural and sensible. It is MDA's foundation and its most powerful weapon.

MOF: MOF (Meta Object Facility) is a higher level of abstraction than UML. Its purpose is to describe UML, UML extensions, and other possible future UML-like modeling languages. Although MOF was not created for MDA either, it shows the foresight of the OMG's engineers.

XMI: XMI (XML Metadata Interchange) defines an XML-based data-exchange format for the various models through a standardized XML document format and DTDs (Document Type Definitions). This allows a model, as a product, to be exchanged among a variety of different tools, which is important to ensure that MDA does not remove one constraint only to impose a new one.

CWM: CWM (Common Warehouse Metamodel) provides a means of data-format transformation. CWM can be used at any level of model to describe the mapping rules between two data models, for example transforming data entities from a relational database into XML format. Under the MOF framework, CWM makes a general data-model transformation engine possible.

In the OMG's blueprint, the standards UML, MOF, XMI, and CWM respectively solve MDA's problems of model establishment, model extension, model exchange, and model transformation. The OMG attempts to expand MDA's scope of application through standardized definitions. Within such an extensible modeling-language environment, IT vendors are free to implement their own modeling languages and the mapping from language to executable code, but always under the OMG's standardized framework.

summary

This chapter has introduced the SaaS development model. The discussion of the key technologies for realizing SaaS software gives us a purposeful understanding of this area. The product-line production of the software factory originates in traditional manufacturing; whether manufacturing-style assembly-line operation can be applied in the software industry still faces some problems, but truly realizing the factory model of software is not impossible. Development also requires a system architecture; software architecture is mainly a question of layering, and in this chapter both .NET and J2EE were used to illustrate it through examples. Product development is not only a technical road but also a matter of enterprise business decision-making. Different companies can adopt the R&D model that is effective and optimal for their actual situation; establishing and accumulating your own development system helps you reuse code and greatly reduces development costs.

Introduction

The real question is not whether computers have the ability to think, but whether humans have this ability

________B.F. Skinner, Computer Science

SaaS model is different from traditional software not only in operational services, but also in software development methods and technologies.

How to develop SaaS software and what technologies will be used to develop SaaS software are the main contents of our research.

Key Technologies for Realizing SaaS Software

SOA technology

People call SOA and SaaS twin sisters. SOA and SaaS are two carriages in the field of modern software services, running fast and keeping pace with each other.

Service-Oriented Architecture (SOA) was first proposed by Gartner in the late 1990s, emphasizing the importance of services. Most domestic consumers gradually came to know and understand it through the promotion of IBM, the leader in the SOA field.

With the passage of time, application software developers have become more and more involved in the field of SOA, and it is no exaggeration to say that SOA has become ubiquitous. As SaaS grew more popular and SOA continued to deepen, in December 2007 Microsoft took the lead in the industry by proposing the “software + service” (S+S) strategy, aiming to bring together “internal business integration (SOA) + external business development (SaaS) + rich user experience” and other resources, organically combining “software” and “services” to maximize the value of IT and to let SaaS and SOA “have their cake and eat it too”.

According to the definition made by Microsoft in a technical white paper, “software + service” is an ” IT umbrella”, which integrates many existing IT technologies and theories, including SaaS , SOA and Web2.0 . With different manufacturers entering from different entry points, the entire IT industry is holding up the umbrella of ” software + services ” and heading towards the future of IT .

“The increasing complexity of the IT environment has continually raised people's demands on technology products. Technology trends for the next 10 years show that a single, one-size-fits-all technology product or service will not meet the needs of social and economic development; the global technology ecosystem will develop healthily in the direction of diversity, dynamism, and service.” Donald Ferguson, a Microsoft Technical Fellow and member of the Microsoft CTO Office, believes that in the field of services, users can try before buying and pay on demand; in the field of software, users have complete control: customization, one-time payment, and use for as long as they want. Choosing pure software or pure services in fact means giving up the advantages of the other. “S+S” solves this problem well: it addresses the varied needs of users, who may choose to obtain services, to continue to own software, or to have both.

“SOA is also very important for software vendors that offer SaaS,” said Dana Gardner, principal analyst at Interarbor Solutions. The reason is that SOA helps them deliver application software more efficiently; moreover, it gives them a price advantage over traditional packaged-application vendors.

Dr. Li Zhixiao, Chief Technology Officer of Microsoft China, said that software and services play complementary roles in “S+S”, and that 2008 would be an important year for Microsoft to step up its “S+S” strategy. According to Liu Qinzhong, director of SAP Business ByDesign, SAP would also transform itself in 2008, expanding new SaaS channels with SOA-architected products to gain the dual benefits of SaaS and SOA.

Cloud computing technology

As a new way of selling application software, SaaS has begun to flourish, but as the number of SaaS customers grows, basic resources such as network storage and bandwidth gradually become bottlenecks. For many enterprises, the performance of their own computing equipment may never meet demand. A simple solution is to keep buying more, and more advanced, equipment, but equipment costs then rise sharply and profits fall. Is there a more cost-effective solution? The emergence of “cloud computing” may open the door to solving this problem.

Cloud computing is an emerging, Internet-based way of sharing infrastructure, usually built on large server clusters that include computing servers, storage servers, bandwidth resources, and so on. It uses the transmission capacity of the high-speed Internet to move data processing from personal computers or individual servers to server clusters on the Internet. These clusters are managed by a large data-processing center, which allocates computing resources according to customer needs, connecting huge pools of systems together to provide various IT services and achieving the same effect as a supercomputer. Cloud computing centralizes all computing resources and manages them automatically with software, without human involvement. This lets companies focus on their own business without worrying about cumbersome details, which is conducive to innovation.

SaaS providers usually focus on software development but have weak capabilities for managing network resources; they often spend a great deal of money purchasing infrastructure such as servers and bandwidth, yet the user load they can support remains limited. Cloud computing provides a simple and efficient mechanism for managing network resources: it allocates computing tasks, rebalances workloads, dynamically allocates resources, and so on. It can help SaaS vendors provide enormous resources to huge numbers of users, so that vendors no longer waste their own resources on infrastructure such as servers and bandwidth and can focus on specific software development and applications, achieving a win for end users, SaaS vendors, and cloud computing providers alike.
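The allocation mechanism just described, assigning each incoming task to the least-loaded server so the pool stays balanced without manual intervention, can be sketched in a few lines. The server names and task costs below are hypothetical:

```python
# A minimal sketch of cloud-style resource allocation: each task goes to the
# least-loaded server in the pool, so the workload stays balanced without any
# manual intervention. Server names and task costs are hypothetical.
import heapq

class ServerPool:
    def __init__(self, servers):
        # heap of (current_load, server_name)
        self._heap = [(0, name) for name in servers]
        heapq.heapify(self._heap)

    def submit(self, task_cost):
        # Pick the least-loaded server, charge the task to it, and return it.
        load, name = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + task_cost, name))
        return name

    def loads(self):
        return {name: load for load, name in self._heap}

pool = ServerPool(["node-a", "node-b", "node-c"])
for cost in [5, 3, 4, 2, 6]:          # incoming SaaS workloads
    pool.submit(cost)
print(pool.loads())
```

A real scheduler would also rebalance when servers join or fail, but the principle is the same: the allocation decision lives in software, not with an administrator.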

It can be seen that cloud computing has considerable potential in the enterprise software market, and it is a great opportunity for SaaS suppliers. They can choose cloud computing platforms and use cloud infrastructure, taking advantage of its low cost at massive scale to provide more stable, fast, and secure applications and services to a huge user base.

To grasp the concept of cloud computing quickly, consider the cloud symbol in network architecture diagrams. In such diagrams, the structure of the Internet connection is hidden behind a cloud, so there is no need to understand its complexity; one can reason with a simplified concept instead. Cloud computing likewise hides the complexity of the computing system, so that developers do not need to understand the architecture that provides the computing resources: they simply submit their data to the system, and the system returns the result.

Cloud technology can be regarded as a subset of grid technology. The purpose of both is to hide the complexity of the system so that users can use it without knowing how the system works.

Ajax technology

Ajax (Asynchronous JavaScript and XML) is a set of technologies for developing web applications. It combines JavaScript, XML, DHTML, and DOM programming techniques, allowing developers to build web applications that break with the practice of reloading the entire page. It lets browsers provide users with a more natural browsing experience: whenever an update is required, modifications to the client web page are asynchronous and incremental. In this way, Ajax greatly improves the responsiveness of the user interface when submitting web page content. In an Ajax-based application there is no need to wait long for the entire page to refresh; only the parts of the page that need updating are changed, and where possible the updates are done locally and asynchronously. Users of SaaS application services thus get partial page refresh, and browser-based B/S software feels as familiar and smooth as traditional C/S software. Through SaaS, Ajax-style applications are increasingly used in the software industry.
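Although Ajax itself runs in the browser, its core idea, asynchronously fetching only the changed fragment and patching it into the page, can be sketched outside a browser. In this Python sketch the page is a plain dictionary and `fetch_fragment` stands in for the real XMLHttpRequest round trip:

```python
# Sketch of the Ajax update cycle: instead of reloading the whole page, the
# client asynchronously requests one fragment and patches it in place.
# fetch_fragment() is a stand-in for the real XMLHttpRequest/fetch round trip.
import asyncio

page = {"header": "My SaaS App", "orders": "loading...", "footer": "v1.0"}

async def fetch_fragment(region):
    await asyncio.sleep(0)          # simulated network latency
    return "3 open orders"          # server renders only this fragment

async def refresh(region):
    # Only the named region changes; the rest of the page is untouched.
    page[region] = await fetch_fragment(region)

asyncio.run(refresh("orders"))
print(page["orders"])
```

The header and footer are never touched, which is exactly what makes Ajax interfaces feel responsive.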

Web Service technology

Web Service is a component-integration technology based on HTTP, with SOAP as its lightweight transmission protocol and XML as its data-encapsulation standard.

Web Service is essentially an interface proposed so that information on originally isolated sites can be exchanged and shared. Web Services use unified, open standards on the Internet, so they can be used in any environment (Windows, Linux, and so on) that supports those standards. Their design goals are simplicity and extensibility, which facilitates interoperability among large numbers of heterogeneous programs and platforms, so that existing applications can be accessed by a wide range of users.

SOAP is the core of Web Service technology. It encapsulates data packets in a standard XML format; the encapsulated communication information is expressed as text and follows standard encapsulation rules. This means that any component model, development tool, programming language, or application system can use the technology smoothly as long as it supports XML and text formats. Since essentially all component models, development tools, programming languages, application systems, and operating systems now support XML and text, SOAP can be fully supported.
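This is why SOAP travels so easily between platforms: a minimal SOAP envelope can be built and taken apart with nothing more than standard XML tooling. In the sketch below, the service namespace and the GetPrice web method are invented for illustration:

```python
# Build and re-parse a minimal SOAP envelope using only the standard library.
# The service namespace and the GetPrice method are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stock"     # hypothetical service namespace

def make_envelope(symbol):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}GetPrice")
    ET.SubElement(call, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(env, encoding="unicode")

xml_text = make_envelope("MSFT")
# Any platform that understands XML can take the message apart again:
root = ET.fromstring(xml_text)
symbol = root.find(f".//{{{SVC_NS}}}symbol").text
print(symbol)
```

Because the message is plain XML text, the sender and receiver need not share a component model, operating system, or programming language.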

In SaaS software, Web Services provide the mechanism by which components communicate. Web Service technology greatly improves the extensibility of a system and enables seamless integration of application systems built on different platforms and with different development tools. SOAP, the core of Web Service technology, is an open standard protocol: it breaks through application barriers, works with enterprise firewalls and internal information systems, and provides a secure, integrated application environment. It allows enterprises to encapsulate any custom information without modifying application source code, giving the system strong flexibility.

Single sign-on technology

A basic ease-of-use requirement of modern web applications, at least within our system, is that a user can access all the subsystems he is authorized for with a single login.

Single Sign-On (SSO) allows automatic access to all authorized application systems through a single login, improving overall security and removing the need to remember multiple login procedures, IDs, or passwords.

Single sign-on plays a very important role in a Web Service environment. There, systems need to communicate with one another, but requiring every system to maintain every other system's access control list is impractical. Users also expect a better experience: using the different systems involved in one business process without cumbersome repeated logins and authentications. In such an environment there are also systems with their own authentication and authorization implementations. The user's credentials must therefore be mapped between the different systems, and once a user is deleted, that user must lose access to all participating systems.

SAML is a standard for encoding authentication and authorization information in XML. A Web Service can request and receive SAML assertions from a SAML-compliant authentication and authorization service, and authenticate and authorize a service requester accordingly. Because SAML can transfer credentials between multiple systems, it is used in single sign-on scenarios.
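The essence of the mechanism, one trusted party issuing a signed assertion that other systems verify instead of re-authenticating the user, can be sketched as follows. The shared HMAC key and dictionary-based "assertion" here are simplified stand-ins for real SAML XML and certificate-based signatures:

```python
# Single sign-on in miniature: an identity provider signs an assertion about
# the user; any participating system verifies the signature instead of asking
# the user to log in again. HMAC and the dict stand in for XML + certificates.
import hashlib, hmac, json

SHARED_KEY = b"demo-key"                  # hypothetical trust relationship

def issue_assertion(user, roles):
    payload = json.dumps({"user": user, "roles": roles}, sort_keys=True)
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload, sig

def verify_assertion(payload, sig):
    expect = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, sig)

payload, sig = issue_assertion("alice", ["billing", "crm"])
print(verify_assertion(payload, sig))          # relying system accepts it
print(verify_assertion(payload + "x", sig))    # tampering is detected
```

Note that a relying system never sees the user's password; it trusts the issuer's signature, which is also what makes revocation (deleting the user at the issuer) take effect everywhere.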

Product Line Production in Software Factory

The economic and technical problems that hinder the transition from craft to industrialized production can be overcome by applying important new approaches that deal with complexity and change in new ways. These approaches exist today and, although most are immature, show clear commercial potential. They fall into four main areas: systematic reuse, development by assembly, model-driven development, and process frameworks. Let us consider them one by one.

  • system reuse

One of the most important new approaches in software development is to define families of software products whose members vary but share many common characteristics. As Parnas observed, such a family provides a context in which problems common to its members can be solved collectively. By identifying the features that are present in most or all of the products and distinguishing them from the features that vary, we can take a systematic approach to reuse. A software product family may consist of components or of entire products. For example, a family might contain different investment-management applications together with a common user-management framework that is used by both the investment-management and customer-relationship-management applications.

Software product families are developed by system integrators (SIs), who migrate applications from one customer to another or improve existing applications to create new ones; by independent software vendors, who develop multi-market applications such as CRM, or multi-version applications through maintenance and improvement; and by IT organizations, which improve existing applications and develop related or multi-version applications through maintenance and improvement.

  • The practice of software production line

Software production lines make developing the members of a software product family faster, cheaper, and less risky by identifying their common features and providing for variation in specific areas. Rather than relying on ad hoc reuse, they systematically capture the knowledge of how to develop family members, turning it into reusable assets that are applied during member development. When products are developed as a family, requirements, architectures, frameworks, components, tests, and other assets can all be reused.

Of course, developing a production line has a cost; in other words, the production line embodies a classic cost-benefit trade-off. On the benefit side of the equation, the gain comes not from producing many copies in a market that supports only limited releases, but from producing many related yet unique products, as described in many case studies [CN01]. Using software production lines is the first step toward software industrialization; making them cheaper to create and run is the second. Figure 3-1 depicts the major tasks, artifact production, and asset use on a production line.

Figure 3-1 Software production line

Production-line developers use production assets to develop members of the software family, much as platform developers create device drivers and operating systems for use by application developers. An important step in developing production assets is to build one or more domain models that describe the common features the production line provides and the ways its members vary. Together, these models define the scope of the production line and the expected family members. The requirements of family members are derived from these models, providing a way to relate changes in requirements to changes in the architecture, implementation, executables, development process, project environment, and other parts of the software life cycle.

  • Model Driven Development

Raising the level of abstraction is an important trend. It narrows what the developer must specify and therefore gives the developer less control over the implementation, but that loss of control is repaid by a corresponding increase in power. Most commercial application developers, for example, would rather use higher-level abstractions like C# and the .NET Framework than assembly language and system calls. Higher levels of abstraction yield many benefits, including higher productivity, fewer defects, and easier maintenance and improvement.

Unfortunately, raising the level of abstraction in platforms and tools is very expensive. If we could find a way to make it faster, cheaper, and easier, we could provide higher levels of automation for small problem domains. This is the goal of Model-Driven Development (MDD). MDD uses models to capture high-level information, usually expressed only informally today, and puts it to work, either by compiling the models into executables or by using them to make development by humans easier. This matters because such information currently lives outside low-level artifacts such as source code files, making it difficult to track, maintain, and continuously improve.

Some development activities, such as building, configuring, and debugging, are already partially or fully automated using information captured from source code files and other implementation artifacts. Using information captured in models, MDD can automate more activities, and at higher levels, such as model debugging and automatic configuration tools. Here are some examples:

  • Routine tasks, such as producing one artifact from another, can often be fully automated. For example, test harnesses can often be generated automatically from user-interface mockups that capture page transitions, simulating user activity.
  • Other tasks, such as resolving differences between artifacts, can be partially automated. For example, mismatches between table columns and form fields can be flagged for the user and then corrected automatically at the user's discretion.
  • Adapters, such as Web service wrappers, can be generated automatically from models of the differences to be bridged between implementation technologies. Models can also be used to configure representations, protocols, and other adaptive integration mechanisms.
  • Models can be used to define the configuration of the artifacts being assembled, automating the configuration process. A model of the configuration environment can be used to constrain the design so that it is implemented correctly.
  • Models can be used to describe deployed components, capturing information about operational characteristics such as load balancing, failure recovery, and resource allocation policies, and automating management activities such as data collection and reporting.
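As a tiny illustration of the first point, producing one artifact from another, the sketch below generates a runnable Python class from a declarative model. The dictionary model stands in for a real modeling language, and the entity and field names are invented:

```python
# Model-driven generation in miniature: a declarative model (here a dict,
# standing in for a real modeling language) is compiled into source code.
# The Customer entity and its fields are hypothetical.
model = {"entity": "Customer", "fields": ["name", "email"]}

def generate(model):
    lines = [f"class {model['entity']}:"]
    params = ", ".join(model["fields"])
    lines.append(f"    def __init__(self, {params}):")
    for f in model["fields"]:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines)

source = generate(model)
namespace = {}
exec(source, namespace)               # "compile" the model into a live class
Customer = namespace["Customer"]
c = Customer("Ada", "ada@example.com")
print(c.name)
```

Because the class is derived from the model, changing the model regenerates the artifact; the knowledge lives in the model, not scattered through hand-written code.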
  • domain-specific language

In MDD we are no longer interested in dead-end languages like the 4GLs, nor in one high-level language that covers every aspect of development; the weaknesses of those strategies are well documented. Nor are we interested in models that appear only in presentations and annotations. Unfortunately, models are often written to document things for humans rather than for computers, which creates the impression that models are not first-class development artifacts the way source code is. We are interested in models processed by tools and used in the same way as source code. For that, a design cannot be documented in an imprecise language: models must be precise and unambiguous. At the same time, to raise the level of abstraction, modeling languages must focus on narrow domains rather than being general-purpose programming languages. This leads to the following requirements:

  • The goals of the language design must be clearly stated, so that reviewers familiar with the domain can evaluate the language and decide whether it achieves them.
  • The language must let people working in the domain capture business concepts. A language for developing and assembling Web services must include concepts such as Web services, Web methods, protocols, and protocol-based connections. Likewise, a language for visualizing and editing C# source code must contain C# concepts such as classes, members, fields, methods, properties, events, and delegates.
  • The language's concepts and their names must be familiar to its users. For example, a C# developer finds a model of a class with fields and methods more natural than a model of a class with attributes and operations.
  • The language's notation, whether graphical or textual, must be easy to use for the problems at hand. The things its users do daily must be easy to express with its concepts. For example, in a language for visualizing and editing C# source code, it must be easy to manipulate an inheritance relationship.
  • The language must have a well-defined set of rules, called a grammar, governing how expressions are formed from its concepts. This makes it possible for tools to check that expressions are well formed, while helping users write them.
  • The semantics of each expression must be well defined, so that users can build models that others understand, tools can generate valid implementations from models, and metadata captured from models does what users expect when it is used in processing tasks such as configuring a server.

A language that meets these criteria is called a domain-specific language (DSL), because it models concepts specific to a domain. DSLs are narrower than general-purpose modeling languages. Like a programming language, a DSL has a textual or graphical notation. SQL and HTML are two examples of DSLs, defining relational data and web pages respectively.
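To make the idea concrete, here is a toy textual DSL for declaring connections between Web services, with a checkable grammar and a well-defined meaning. The syntax and the service names are invented for illustration:

```python
# A toy textual DSL: each line declares a connection between two services.
# The grammar is "connect <client> -> <service>"; anything else is rejected,
# which is the kind of checking a DSL's grammar makes possible.
import re

GRAMMAR = re.compile(r"^connect (\w+) -> (\w+)$")

def parse(program):
    connections = []
    for line in program.strip().splitlines():
        m = GRAMMAR.match(line.strip())
        if not m:
            raise SyntaxError(f"not a valid connection: {line!r}")
        connections.append(m.groups())
    return connections

program = """
connect OrderPortal -> BillingService
connect OrderPortal -> InventoryService
"""
print(parse(program))
```

The language knows nothing but connections, which is exactly the point: a narrow domain permits precise checking and a high level of abstraction at the same time.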

Figure 3-2, a screenshot from Microsoft Visual Studio 2005 Team System, shows two diagrams illustrating DSLs. The DSL on the left describes components, such as Web services, and is used to automate component development and configuration. The DSL on the right describes the logical server types in a data center and is used to design and implement data-center configurations. Web services are deployed by dragging service components onto logical servers; mismatches between resource requirements and what the logical servers provide are flagged as validation errors on the diagram.

Figure 3-2 Domain-specific languages

  • Incremental code generation

The key to efficient code generation is generating less code per concept. This lets tools exploit platform features and produce compact, efficient, platform-specific implementations. One way to generate more of the code is to bring the model closer to the platform, as shown in Figure 3-3. For example, a modeling language defined in terms of a programming language's type system can model the program more faithfully than one defined over an abstract type system. The model then becomes a view of the code, in which the developer graphically manipulates program structure such as class and method definitions. Such a tool makes visible the relationships and dependencies that are hard to see in code, and saves time and effort by generating the code for program structure. It can support programming styles such as relational or collection-based ones, or provide advanced features such as pattern construction, application, and evaluation.

Figure 3-3 SaaS operator relationship group

Of course, limiting the abstraction to what the platform offers diminishes the role of modeling, making it little more than another programming style. So how do we work at a higher level of abstraction? We use more abstract models and bridge the gap between model and platform with frameworks or transformations, as shown in Figure 3-4. Let us look at these one by one.

Figure 3-4 Programming language modeling

Use high-level abstractions

  • We can use frameworks to implement higher-level abstractions, and use models to generate small pieces of code at the framework's extension points. The models help users complete the framework extensions by visualizing framework concepts and presenting them intuitively. Building graphical applications, for example, was difficult with early Microsoft operating system APIs; Microsoft Visual Basic later made graphics far easier through its form and control concepts.
  • Instead of frameworks, we can define transformations between lower-level and higher-level DSLs. To span a wider gap, we can chain more than two DSLs, progressively refining models described in the highest-level DSL into executable software, as shown in Figure 3-4. This is how compilers work: high-level languages like C# and Java are converted into intermediate code such as bytecode or IL, which is then JIT-compiled into the target platform's binary format.
  • composition mechanism

Of course, handwritten code must usually be combined with framework code to produce a complete executable program. Several different mechanisms can be used to do this; the important difference between them is when the binding happens.

Figure 3-5 Composition of Design Time

Runtime binding has two advantages: it combines handwritten code with framework code through interfaces, allowing dynamic configuration by object substitution, and delegation keeps handwritten code protected across regeneration. A minor disadvantage is the overhead of runtime method calls. Several runtime-binding mechanisms are popular in component programming models, as shown in Figure 3-6; all have been very successful in large-scale commercial products.

  • Design-time composition merges handwritten code and framework code in the same artifact before compilation, as shown in Figure 3-5. Some tools constrain the editing experience (for example, editors with read-only regions) to keep users from modifying framework code; in other tools, users add handwritten code in a special window. Runtime binding instead merges handwritten code and framework code through calls and callbacks. Proxy-based runtime-binding mechanisms are described by design patterns, such as the following from Gamma et al.: events (Observer), adapters (Adapter), policy objects (Strategy), factories (Abstract Factory), orchestration (Mediator), wrappers (Decorator), proxies (Proxy), commands (Command), and filters (Chain of Responsibility) [GHJV95].
  • Handwritten subclass. The user provides handwritten code in a subclass of a framework class. Abstract methods in the framework code define the explicit override points. For example, the user subclasses a framework entity, and the framework invokes the handwritten code through the Template Method pattern.
  • Framework subclass. The user provides handwritten code in a parent class of the framework code; methods of the handwritten code are overridden by the framework code. For example, a framework entity subclasses the handwritten parent class and calls its methods explicitly.
  • Handwritten delegate class. The user provides handwritten code in a delegate class. For example, a framework entity calls a handwritten entity at specified points, such as before or after setting a property value. This is, in effect, the Proxy pattern.
  • Framework delegate class. The user's handwritten code calls into the framework to obtain its services. For example, a handwritten entity calls a framework entity to set or get property values.
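The first mechanism, a handwritten subclass filling in a framework's Template Method hooks, looks like this in miniature. The framework class and its hook name are invented for illustration:

```python
# Handwritten-subclass composition: the framework defines the flow and an
# abstract hook; the user's handwritten subclass fills in the hook.
# FrameworkEntity and its validate() hook are hypothetical.
from abc import ABC, abstractmethod

class FrameworkEntity(ABC):         # generated/framework code
    def save(self):
        # Template Method: fixed framework flow around a user hook.
        self.validate()             # handwritten extension point
        return f"saved {self.__class__.__name__}"

    @abstractmethod
    def validate(self):
        ...

class Customer(FrameworkEntity):    # handwritten code
    def __init__(self, name):
        self.name = name

    def validate(self):
        if not self.name:
            raise ValueError("name required")

print(Customer("Ada").save())
```

Note how the framework can be regenerated freely: the handwritten logic lives only in the subclass, behind the abstract hook.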

Figure 3-6 Runtime composition

  • Compile-time binding merges handwritten code and framework code during compilation, as shown in Figure 3-9. Partial specifications merged at compile time are a good way to do this; the partial classes of Visual Basic and C# in Visual Studio 2005 are composed at compile time.

Figure 3-7 SaaS operator relationship group

  • Assembly development

The important innovations in assembly development are platform-independent protocols, self-description, variable encapsulation, process-managed assembly, and architecture-driven development.

  • Platform independent protocol

Web services technology succeeded where earlier component-assembly technologies, which tried to separate component specification and assembly from implementation technology, failed. Because XML is a technology for managing information rather than for building components, Web services use encapsulation to map Web method calls onto native method calls in the underlying component implementation technology. CORBA attempted a similar strategy, but its complexity demanded significant investment from platform vendors, which limited its adoption. Simple XML-based protocols significantly reduce the difficulty of implementation, ensuring their universality. By encoding remote method invocation requests as XML, they avoid the interoperability problems caused by platform-specific call encodings and parameter marshaling. And by gaining broad acceptance as industry standards, they were designed for cross-platform interoperability from the start.

  • self description

Self-description reduces architecture mismatch by improving component packaging so that assumptions, dependencies, behavior, resource consumption, performance, and certifications are made explicit. It provides metadata that can be used to automate component discovery, selection, licensing, acquisition, installation, adaptation, assembly, testing, configuration, deployment, monitoring, and management.

The most important form of self-description describes a component's assumptions, dependencies, and behavior, so that developers can reason about interactions between components and tools can verify assemblies. The most widely used specifications in object orientation are class and interface declarations. They define the behavior a class provides, but they state important assumptions and dependencies only implicitly, by naming other classes and interfaces in method signatures. A contract is a richer specification. A contract governs the interaction between components, not merely what happens when a component is called: it describes sequences of interactions, and the responses to protocol violations and other unexpected conditions.

Of course, contracts are useless unless they are enforced. There are two ways to enforce a contract:

  • Refuse to assemble components whose contracts mismatch.
  • Use the information the contracts provide to generate adapters that enable direct interaction between components, or to coordinate the interaction between them.
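A minimal sketch of the first enforcement style: each component declares the operations it provides and requires, and the assembler refuses mismatched pairs. The contract format (sets of operation names) and the component names are invented for illustration:

```python
# Contract enforcement in miniature: components declare what they provide and
# require; assembly is refused when the contracts do not match.
# The contract format (sets of operation names) is hypothetical.

class Component:
    def __init__(self, name, provides, requires):
        self.name = name
        self.provides = set(provides)
        self.requires = set(requires)

def assemble(client, service):
    missing = client.requires - service.provides
    if missing:
        raise TypeError(f"{service.name} does not provide {sorted(missing)}")
    return (client.name, service.name)

portal = Component("OrderPortal", provides=[], requires=["charge", "refund"])
billing = Component("Billing", provides=["charge", "refund"], requires=[])
print(assemble(portal, billing))
```

A real contract would also cover interaction sequences and error responses, as the text notes; matching operation sets is only the first, simplest check.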

Garlan recommends standard adaptation techniques, recipes, and tools that provide encapsulation and data conversion [Gar96]. One of the most promising adaptation strategies is to publish partially encapsulated components whose encapsulation is completed during assembly, by adding the aspects that supply the code the assembly requires. This strategy, called variable encapsulation, is described below.

Another important aspect of self-description is certification. If a component can prove that it has only the dependencies it declares, consumes only the resources it declares, exhibits specific behavioral characteristics under given conditions, or is free of certain known weaknesses, then the functional and operational characteristics of software assembled from such components can be predicted. This has been studied at Carnegie Mellon University's Software Engineering Institute under the name Predictable Assembly from Certifiable Components (PACC).

  • variable encapsulation

We have seen that static encapsulation limits reuse: a component is fixed into a particular assembly by statically binding its functional aspects to non-functional aspects that carry contextual dependencies. Variable encapsulation reduces architecture mismatch by publishing partially encapsulated components that can adapt to new contexts, using their functional aspects to select and bind appropriate non-functional aspects, as shown in Figure 3-8. The form a component takes in a particular assembly can thus be determined by the context in which it is placed. Making component boundaries more flexible reduces mismatch and improves reusability. By removing non-functional assumptions from component boundaries, the functional parts can be reused in new contexts. The necessary adaptations can be identified in advance, and in some cases even automated by tools.
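A rough Python sketch of the idea: the functional core is published without its non-functional aspects, and the assembler wraps on whichever aspects (auditing, input checking, and so on) the target context needs, at assembly time rather than development time. All names here are illustrative:

```python
# Variable encapsulation, roughly: a functional core is published without its
# non-functional aspects; the assembler wraps on whichever aspects the target
# context needs. Aspect and component names are illustrative.

def functional_core(x):
    return x * 2

def with_audit(fn, log):
    def wrapped(x):
        log.append(f"call({x})")
        return fn(x)
    return wrapped

def with_bounds_check(fn):
    def wrapped(x):
        if x < 0:
            raise ValueError("negative input")
        return fn(x)
    return wrapped

# Assembly time: this context wants auditing and input checking.
log = []
component = with_audit(with_bounds_check(functional_core), log)
print(component(21))
print(log)
```

A different context could assemble the same core with no aspects at all, or with entirely different ones, which is precisely the flexibility static encapsulation forecloses.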

Figure 3-8 Variable Encapsulation

Variable encapsulation is a variation on Aspect-Oriented Programming (AOP), a method in which the different aspects of a system are developed separately and then woven together [KLM97]. Variable encapsulation differs from AOP in three ways.

  • Variable encapsulation codifies the encapsulation aspect, whereas AOP, as a common practice, codifies non-encapsulated lines of code. On the non-packaged side, the same problems arise when assembling a poorly assembled component package, called architecture mismatch and unpredictability. Indeed, aspect-based sourcing is more prone to these problems than component assembly, since components have at least descriptive behavior and some wrappers that prevent no dependencies. AOP’s lack of packaging makes it difficult for developers to infer aspect compatibility and the functional characteristics of the code, or the implementation result characteristics, making it almost impossible to check the aspect code with tools.
  • Whereas AOP weaves aspects during component development, variable encapsulation binds them later, for example during component assembly or configuration. This matters because the contexts into which a component may be placed are not known until after the component is published. In fact, to support assembly development as described in this article, third parties must be able to predictably assemble and deploy independently developed components. This requires a formal way to separate the functional aspects, the encapsulated aspects, and their specifications. Variable encapsulation can also be progressive, occurring in stages: some aspects can be bound during development, some during assembly, and some at run time.
  • Variable encapsulation is architecture driven, whereas AOP is not. The aspects separated from the functional core must be explicitly defined through interfaces, abstract classes, WSDL files, or other forms of contracts.
  • Process management assembly

If sufficient contract mechanisms exist, services can have the order of information exchanged between them managed by a process management engine such as Microsoft BizTalk Server, as shown in Figure 3-9. Process management assembly makes assembly development easier because services have far fewer dependencies than binary components. Unlike classes, they need not reside in the same implementation; unlike components, which require platform-specific protocols, they can be assembled across platform boundaries. Two services can interact if the contracts between them are compatible. They can be developed and deployed separately and then assembled through process management. With appropriate interception or intermediation services, they can even reside in different administrative and organizational domains. In other words, process management assembly eliminates design-time, compile-time, and deployment-time dependencies between components.

Figure 3-9 Process management assembly

Process management assembly is essentially mediation, as described by the Mediator pattern of Gamma et al.: a mediator manages the flow of interactions between components. A mediator has powerful properties. One is to filter or translate information as components interact. Another is to control interactions, maintaining state across multiple calls if necessary; this allows the mediator to reason about interactions and, if necessary, change them through conditional logic. A mediator can also perform useful functions such as logging, enforcing security policies, and bridging between different technologies or different versions of the same technology. A mediator can even be a functional part of an assembly, enforcing business rules or performing a business function, such as concluding a business transaction.
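The mediator role described above can be sketched as follows (a minimal illustration; the service and message names are invented): the two services never reference each other, and the mediator routes, filters, and logs their interaction.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of process management assembly as mediation: components interact
// only through a mediator that controls and records the message flow.
public class MediatedAssembly {

    interface Service {
        String handle(String message);
    }

    static class Mediator {
        private final Service order;
        private final Service billing;
        final List<String> log = new ArrayList<>();

        Mediator(Service order, Service billing) {
            this.order = order;
            this.billing = billing;
        }

        String placeOrder(String item) {
            log.add("order:" + item);
            String confirmation = order.handle(item);
            if (confirmation.startsWith("OK")) {  // conditional logic lives in the mediator
                log.add("billing:" + item);
                return billing.handle(confirmation);
            }
            return "rejected";
        }
    }

    static String demo() {
        Service order = item -> "OK:" + item;
        Service billing = conf -> "invoiced(" + conf + ")";
        return new Mediator(order, billing).placeOrder("book");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // invoiced(OK:book)
    }
}
```

Because the services see only messages, either one could be replaced, versioned, or moved across an organizational boundary without the other noticing.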

  • Architecture Driven Development

Preventing mismatched components from being assembled is better than detecting invalid assemblies after the fact, and improving the availability of well-matched components is better still. These are the goals of architecture. According to Shaw and Garlan, a software architecture describes the components of a system, their interactions, and acceptable patterns of composition; a well-designed architecture reduces the risk of architectural mismatch by constraining design decisions.

Of course, developing a software architecture is challenging; it takes architects many years to become proficient in even a limited set of architectural styles or application domains. Assembly development cannot reach industrial scale without significant advances in architectural practice and greater trust in software architectures.

These are the goals of Architecture-Driven Software Development (ADD), including:

  • A standard for describing, interpreting, and using architectures.
  • A method for predicting the utility of design decisions.
  • Patterns, or architectural styles, that organize design expertise and help designers produce well-partitioned component designs.

An architectural style is a coarse-grained pattern that provides an abstract framework for a family of systems. It defines a set of rules specifying the kinds of components that can be used to assemble a system, the kinds of relationships permitted in the assembly, the constraints on how components are assembled, and the assumptions the assembly may rely on. For example, a web-service component style might specify that components expose ports defined by WSDL, that ports are joined by connectors only when the two ports are compatible, and that communication uses SOAP over HTTP. Other architectural styles include dataflow, layered, and MVC. An architectural style promotes partitioning and design reuse by providing proven solutions to frequently recurring problems. It also promotes the following:

  • Reuse, by identifying the common architectural elements shared by systems built in the style.
  • Clarity, by defining a standard vocabulary.
  • Interoperability, by defining standard communication mechanisms.
  • Visualization, by defining standard notations.
  • Tool development, by defining the constraints to be enforced.
  • Analysis, by identifying the salient features of systems built in the style.
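The port-compatibility rule of a style like the web-service style above can be sketched as a checkable constraint (a minimal illustration; the Port class and the contract strings are invented):

```java
// Sketch of an architectural style as a set of assembly rules: components
// expose typed ports, and the style permits a connection only when the two
// ports' contracts are compatible.
public class StyleCheck {

    static class Port {
        final String contract;   // the service contract the port carries
        final boolean provided;  // provided (offered) vs. required (consumed)

        Port(String contract, boolean provided) {
            this.contract = contract;
            this.provided = provided;
        }
    }

    // Composition rule: a provided port may be connected to a required port
    // only when both declare the same contract.
    static boolean canConnect(Port a, Port b) {
        return a.provided != b.provided && a.contract.equals(b.contract);
    }

    public static void main(String[] args) {
        Port catalogOut = new Port("CatalogService", true);
        Port catalogIn = new Port("CatalogService", false);
        Port paymentIn = new Port("PaymentService", false);
        System.out.println(canConnect(catalogOut, catalogIn)); // true
        System.out.println(canConnect(catalogOut, paymentIn)); // false
    }
}
```

Because the rule is explicit, a tool can reject an invalid assembly before it is built, which is exactly the preventive role the text assigns to architecture.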

An architecture description is a document that defines a software architecture. IEEE Standard 1471, the recommended practice for architectural description of software-intensive systems, provides guidelines for describing architectures [IEEE1471]. According to these guidelines, a system has one or more stakeholders, each with particular concerns and interests in certain aspects of the system. To be useful to stakeholders, an architecture description must take a form and structure they can understand. An architecture description standard (ADS) is a template for describing the architectures of a family of systems. A viewpoint defines how a view describing one part of a software product is to be produced: it establishes a pattern for the description, defining its scope, purpose and audience, and the conventions, languages, and methods used to develop it.

The key elements documenting a viewpoint include:

  • An identifier and other introductory information (e.g., author, date, references).
  • The stakeholder concerns the viewpoint addresses.
  • The conventions, languages, and methods used to produce views based on the viewpoint.
  • Consistency and completeness checks for conforming views.

A view describes a software product from a given viewpoint. A view is semantically closed, meaning that it describes the product entirely from that viewpoint. A view contains one or more artifacts, each developed according to the requirements of the viewpoint. A view is an instance of a viewpoint and must conform to it to be well formed. A view conforming to a web-design viewpoint, for example, should describe the web layout of a particular software product, using the notation defined by that viewpoint. The key elements documenting a view include:

  • An identifier and other introductory information (e.g., author, date, references).
  • The identifier of the viewpoint to which the view conforms.
  • A description of the software product, constructed using the conventions, languages, and methods defined by the viewpoint.

To understand the difference between a view and its viewpoint, consider a logical database design for a business application. The logical database design is a view of the application, or more precisely, a view of its constituent components. The aspect of the application described, and the language used to describe it, are specified by the logical database design viewpoint. Many different business applications can be specified using the same viewpoint, producing different views, each describing the logical database of some application. These views describe the same aspects, in the same language, but with different content, since each describes a different application. An assembly view can be decomposed into views of the individual components from the same viewpoint.

According to IEEE 1471, an architecture description must identify the viewpoints used and the rationale for using them. An ADS for a specific purpose can thus be defined by enumerating the set of viewpoints it uses. For example, an ADS for a consumer-to-business web application might require a viewpoint for the layout of the web pages and a viewpoint for the layout of the business data. Every view in an architecture description must conform to a viewpoint defined by the ADS.
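The relationship between viewpoints and views can be sketched as data with a conformance check (a minimal illustration, not the IEEE 1471 metamodel; class and field names are invented):

```java
// Sketch of the viewpoint/view relationship: a viewpoint fixes the notation
// and identity, and a view is well formed only if it conforms to a viewpoint.
public class ArchDescription {

    static class Viewpoint {
        final String id;        // e.g. "logical-database-design"
        final String notation;  // the language views must be written in

        Viewpoint(String id, String notation) {
            this.id = id;
            this.notation = notation;
        }
    }

    static class View {
        final String viewpointId; // the viewpoint this view claims to follow
        final String notation;    // the language actually used

        View(String viewpointId, String notation) {
            this.viewpointId = viewpointId;
            this.notation = notation;
        }
    }

    // Well-formedness: a view must name its viewpoint and follow its conventions.
    static boolean conforms(View v, Viewpoint vp) {
        return v.viewpointId.equals(vp.id) && v.notation.equals(vp.notation);
    }

    public static void main(String[] args) {
        Viewpoint dbVp = new Viewpoint("logical-db", "ER diagram");
        System.out.println(conforms(new View("logical-db", "ER diagram"), dbVp)); // true
        System.out.println(conforms(new View("logical-db", "UML"), dbVp));        // false
    }
}
```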

  • Process framework

The key to process maturity is maintaining flexibility as complexity increases with project size, geographic distribution, or duration. Experience tells us that a little structure increases flexibility by reducing the amount of work required. This principle can be applied across a family of software products by using a process framework to manage complex products without sacrificing flexibility.

One difficulty with formal processes is that they are too abstract. The guidance they provide is obvious to experienced developers, yet not specific enough for beginners. To add value in use, a process must address the details of the current project; but every project is unique in many ways, and no single process can satisfy them all. We know how to solve such problems: customize and tailor a formal process for a particular product family. Without professional vendors, such tailoring cannot succeed in the market. Some vendors customize a process for a particular user, usually adding useful elements from other processes such as XP. Others, especially system integrators and ISVs, tailor a process to suit a particular product line or consulting practice. Either way, the key to using any process effectively is to specialize it for a given project so that it contains only immediately usable content. The changes produced by this customization are substantial, and the result often bears little resemblance to the original process.

A highly specialized process includes detailed project information such as tool configurations, network share paths, working instructions for developers, API documentation, the names of key contacts for processes like configuration management and bug tracking, as well as check-in policies, coding styles, peer review procedures, and other details about the project and the team. As with other forms of systematic reuse, this customization pays off only if it can be used more than once; and reusing a highly specialized process asset, like any reused asset, increases flexibility by eliminating work. As Jacobson has often said, the fastest way to build something is to reuse something that already exists, especially a reusable asset that can be customized and extended. Many things can be reused systematically, and the development process is one of them.

A process framework is decomposed into micro-processes attached to ADS viewpoints. Each micro-process describes what is needed to produce a view: it can enumerate the key decision points, identify the transitions at each decision point, describe the required and optional activities, and describe the resources each activity requires and the artifacts it produces. Each artifact has preconditions that must hold before it is processed, postconditions that hold afterward, and invariants that must hold for the artifact to remain stable. For example, we might require that entry conditions hold before an iteration starts and exit conditions hold when it ends, and that all code builds and tests correctly. We call this structure a process framework because it defines the space within which processes can be composed according to the needs and environment of a given project, rather than prescribing one process for all projects. Once a process framework is defined, micro-processes can be combined into whatever workflow a project requires: top-down, bottom-up, inside-out, test-then-code or code-then-test, or any combination of these.

These workflows can be resource driven, allowing optimization via PERT and CPM, with work starting when resources become available. Many kinds of resources can drive planning: requirements and source code, developers and program managers, configuration management artifacts or defect-tracking entries, even opening a port on a server or allocating memory on a device. This is called constraint-based scheduling. Constraint-based scheduling balances the need for structure against the need for flexibility: it provides guidance by placing constraints on development artifacts rather than prescribing a process. Flexibility comes from dynamically generating a workflow under those constraints, adapting to a large number of environmental variables, while captured experience reduces the cost and time of rediscovering knowledge.
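Constraint-based scheduling can be sketched by letting the workflow emerge from pre- and postconditions (a minimal illustration; the micro-process and artifact names are invented):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of constraint-based scheduling: micro-processes declare the artifacts
// they require and produce; the workflow is generated by repeatedly running
// whatever is currently enabled, rather than following a fixed sequence.
public class ConstraintScheduler {

    static class Micro {
        final String name;
        final Set<String> requires; // preconditions: artifacts needed
        final Set<String> produces; // postconditions: artifacts created

        Micro(String name, Set<String> requires, Set<String> produces) {
            this.name = name;
            this.requires = requires;
            this.produces = produces;
        }
    }

    static List<String> schedule(List<Micro> micros, Set<String> available) {
        List<String> order = new ArrayList<>();
        Set<String> have = new HashSet<>(available);
        Set<Micro> pending = new LinkedHashSet<>(micros);
        boolean progress = true;
        while (progress) {
            progress = false;
            for (Iterator<Micro> it = pending.iterator(); it.hasNext(); ) {
                Micro m = it.next();
                if (have.containsAll(m.requires)) {
                    order.add(m.name);
                    have.addAll(m.produces);
                    it.remove();
                    progress = true;
                }
            }
        }
        return order;
    }

    static String demo() {
        List<Micro> ms = List.of(
                new Micro("code", Set.of("design"), Set.of("source")),
                new Micro("design", Set.of("reqs"), Set.of("design")),
                new Micro("test", Set.of("source"), Set.of("report")));
        return schedule(ms, Set.of("reqs")).toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [design, code, test]
    }
}
```

Note that the ordering falls out of the constraints: "design" runs first even though "code" was listed first, because only its precondition is initially satisfied.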

A process framework need not itself be heavyweight or lightweight; it may contain more or less detail as required. This provides a way to scale the process to its environment. For example, a small, agile team can use a small framework that provides only a few key practices, such as XP, while a large organization can add detailed build, inspection, and test processes or component-sharing rules.

System Architecture Design

The system architecture determines the stability, robustness, scalability, compatibility, and availability of a system; it is the soul of the system and the heart of the architect's concern. A good architecture is the beginning of a successful system; without one, even the best code and detailed design will not help.

Introduction to the main development frameworks for .NET

  • Castle

Castle is an open source project for the .NET platform. From the ORM data access framework to the IoC container, to the MVC web framework and AOP facilities, it covers essentially the whole development process and provides good support for quickly building enterprise applications. Its key technologies include ActiveRecord, Facilities, and MonoRail.

Advantages: It embodies the ideas of ORM, IoC, ActiveRecord, and the MVC framework.

Disadvantage: The division of the framework level is not very clear.

  • PetShop

PetShop is used by Microsoft to demonstrate the capabilities of .NET enterprise system development. PetShop 4.0 was released by Microsoft for SQL Server 2005 and Visual Studio 2005 and uses several new technologies: cached data synchronized with database updates, new Web controls, master pages, asynchronous communication, and message queues, all very useful techniques. The abstract factory pattern is used extensively in PetShop. Thanks to Master Pages, Membership, and Profile, the amount of code in the presentation layer is reduced by 25% and in the data layer by 36%.

Figure 3-10 Architecture of PetShop4.0

In the data access layer (DAL) of PetShop 4.0, the DAL Interface abstracts the data access logic and the DAL Factory serves as the factory module for data access layer objects. The DAL Interface has concrete implementations: SQL Server DAL supporting MS SQL Server and Oracle DAL supporting Oracle. The Model module contains the data entity objects. The data access layer thus fully adopts interface-oriented programming: the abstracted IDAL module removes the dependency on a specific database, making the entire layer amenable to database migration. The DALFactory module manages the creation of DAL objects for easy access by the business logic layer. Both the SQLServerDAL and OracleDAL modules implement the interfaces of the IDAL module; the logic they contain covers the Select, Insert, Update, and Delete operations on the database, with code differing where database behavior differs.

In addition to removing downward dependencies, the abstracted IDAL module leaves only a weak dependency for the business logic layer above it.
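PetShop itself is written in C#, but the IDAL-plus-factory idea it demonstrates can be sketched in Java (class and method names are invented; in PetShop the class name passed to the factory comes from configuration):

```java
// Sketch of the PetShop DAL idea: the business layer depends only on an
// interface, and a factory creates the concrete implementation by reflection
// from a configured type name.
public class DalFactoryDemo {

    public interface IOrderDal {
        String insert(String order);
    }

    public static class SqlServerOrderDal implements IOrderDal {
        public String insert(String order) { return "sqlserver:" + order; }
    }

    public static class OracleOrderDal implements IOrderDal {
        public String insert(String order) { return "oracle:" + order; }
    }

    // The factory instantiates the configured class by reflection, so the
    // business logic layer never references a concrete database module.
    public static IOrderDal create(String className) {
        try {
            return (IOrderDal) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("unknown DAL: " + className, e);
        }
    }

    public static void main(String[] args) {
        // In PetShop the type name would be read from Web.config.
        String configured = DalFactoryDemo.class.getName() + "$SqlServerOrderDal";
        IOrderDal dal = create(configured);
        System.out.println(dal.insert("o-1")); // sqlserver:o-1
    }
}
```

Switching databases means changing the configured name, not recompiling the business layer, which is exactly the weak dependency described above.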

Advantages: It embodies the factory pattern and IoC ideas, and demonstrates .NET enterprise-level development.

Disadvantage: no ORM idea.

  • NHibernate

Hibernate is the most widely used open source object-relational mapping framework. It encapsulates Java's JDBC (similar to ADO.NET) with very lightweight objects, allowing programmers to manipulate the database with object-oriented thinking, and it has become quite popular in Java development circles. NHibernate, like NUnit and NAnt, is the .NET implementation of Hibernate. It mainly embodies the idea of ORM, solves the persistence-layer problem in layered development, and is very important in N-tier development.

Advantages: embodies ORM and the persistence layer.

Disadvantages: The configuration is complex, and it relies too much on XML files.

Summary of techniques used:

O/R mapping, layered architecture, Castle ActiveRecord, Atlas, reflection, design patterns (singleton, simple factory, strategy), XML, IoC, and frameworks.

Introduction to the current main development frameworks for J2EE

  • Struts framework

The Struts framework is an open source product for developing Web applications based on the Model-View-Controller (MVC) design paradigm. It uses and extends the Java Servlet API and was originally created by Craig McClanahan; in May 2000 it was donated to the Apache Foundation. Struts provides a powerful custom tag library, tiles, form validation, and I18N (internationalization). In addition, Struts supports many presentation-layer technologies, including JSP, XML/XSLT, JavaServer Faces (JSF), and Velocity; it also supports model-layer technologies, including JavaBeans and EJB.

The core request and response flow of Struts is as follows:

JSP (TagLib) ——> ActionForm ——> Action ——> Event ——> EJBAction ——> EJB ——> DAO ——> Database

JSP (TagLib) (forward) <—— Action <—— EventResponse <——
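The round trip above can be sketched in plain Java (an illustration of the MVC flow only, not the real Struts API; class names are invented): request parameters are bound to a form bean, an action processes the form, and the returned forward names the next view.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Struts request flow: controller binds request -> form,
// runs the action, and forwards to the view (a JSP) the action selects.
public class StrutsFlowSketch {

    static class LoginForm {               // plays the role of a Struts ActionForm
        String user;
        void populate(Map<String, String> request) { user = request.get("user"); }
    }

    interface Action {                     // plays the role of a Struts Action
        String execute(LoginForm form);
    }

    static String dispatch(Map<String, String> request, Action action) {
        LoginForm form = new LoginForm();
        form.populate(request);            // bind request parameters to the form
        return action.execute(form);       // result names the view to forward to
    }

    static String demo(String user) {
        Action login = form -> form.user != null ? "welcome.jsp" : "login.jsp";
        Map<String, String> request = new HashMap<>();
        if (user != null) request.put("user", user);
        return dispatch(request, login);
    }

    public static void main(String[] args) {
        System.out.println(demo("li")); // welcome.jsp
        System.out.println(demo(null)); // login.jsp
    }
}
```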

Advantages: Based on the MVC pattern, well structured, JSP based.

Disadvantages: Scalability is limited; it is not well suited to large projects with complex logic, and the framework hierarchy is not very clear.

  • Spring Framework

The Spring Framework is a layered Java/J2EE application framework based on the code designed and published with Expert One-on-One J2EE Design and Development. It provides simple development techniques that replace a large amount of the property files and helper classes found in typical projects.

Spring is an open source framework created by Rod Johnson and described in his book “J2EE Design and Development Programming Guide”. It was created to address the complexities of enterprise application development. Spring makes it possible to use basic JavaBeans to do things that were previously only possible with EJBs. However, Spring’s uses are not limited to server-side development. Any Java application can benefit from Spring in terms of simplicity, testability, and loose coupling.

The main features included in the Spring Framework are :

1. Powerful JavaBeans-based configuration management, applying the Inversion of Control (IoC) principle.
2. A core bean factory usable in any environment, from applets to J2EE containers.
3. A generic abstraction layer for database transaction management, allowing pluggable transaction managers and easy demarcation of transaction boundaries without dealing with low-level issues.
4. A meaningful JDBC abstraction layer with its own exception handling.
5. Integration with Hibernate: DAO implementation support and transaction strategies.
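The IoC principle behind the first two features can be sketched with a hand-rolled container (this is not Spring's API; a minimal illustration of bean registration, lazy singletons, and constructor injection, with all names invented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the IoC idea behind a bean factory: objects declare what they
// need in their constructors, and the container wires dependencies in rather
// than letting objects look them up themselves.
public class IocSketch {

    static class Container {
        private final Map<String, Function<Container, Object>> recipes = new HashMap<>();
        private final Map<String, Object> singletons = new HashMap<>();

        void register(String name, Function<Container, Object> recipe) {
            recipes.put(name, recipe);
        }

        Object get(String name) {          // lazily created singleton beans
            Object bean = singletons.get(name);
            if (bean == null) {
                bean = recipes.get(name).apply(this);
                singletons.put(name, bean);
            }
            return bean;
        }
    }

    interface Repository {
        String find(int id);
    }

    static class OrderService {
        final Repository repo;
        OrderService(Repository repo) { this.repo = repo; }  // constructor injection
        String describe(int id) { return "order " + repo.find(id); }
    }

    static String demo() {
        Container c = new Container();
        c.register("repo", ctx -> (Repository) id -> "#" + id);
        c.register("orderService",
                ctx -> new OrderService((Repository) ctx.get("repo")));
        return ((OrderService) c.get("orderService")).describe(7);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // order #7
    }
}
```

`OrderService` never looks up its repository; the container decides which implementation to inject, which is what makes the class easy to test with a stub.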

Advantages: It embodies the ideas of J2EE, containers, lightweight design, inversion of control, and aspect orientation.

Disadvantages: The structure is complex and difficult to understand.

  • Hibernate framework

Hibernate is an open-source object-relational mapping (ORM) framework that provides a very lightweight object encapsulation of JDBC. It offers an easy-to-use framework for mapping an object-oriented domain model to a traditional relational database, allowing Java programmers to manipulate the database with object-oriented thinking. It not only maps Java classes to database tables (and Java data types to SQL data types) but also provides data query and retrieval facilities, greatly reducing the development time otherwise spent on manual data handling in SQL and JDBC. Most notably, Hibernate can replace CMP in a J2EE architecture that uses EJB, taking over the heavy task of data persistence.

Hibernate's goal is to relieve developers of the programming tasks associated with persisting large amounts of common data. It adapts to the development process, whether starting from a new design or from an existing database. Hibernate generates SQL automatically, freeing developers from the tedious tasks of manually processing result sets and converting objects, and enabling applications to be ported to any SQL database. It also provides transparent persistence; the only requirement for a persistent class is a no-argument constructor.
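The core mapping idea can be sketched with reflection (this is not Hibernate's API; a minimal illustration under the assumption that column names match public field names, with all names invented):

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Sketch of the ORM idea: a row of column values is materialized into an
// object whose public fields match the column names; note the persistent
// class needs only a no-arg constructor, as Hibernate requires.
public class OrmSketch {

    public static class User {
        public long id;
        public String name;
    }

    static <T> T load(Class<T> type, Map<String, Object> row) {
        try {
            T entity = type.getDeclaredConstructor().newInstance();
            for (Field f : type.getFields()) {
                if (row.containsKey(f.getName())) f.set(entity, row.get(f.getName()));
            }
            return entity;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    static String demo() {
        Map<String, Object> row = new HashMap<>();
        row.put("id", 42L);
        row.put("name", "li");
        User u = load(User.class, row);
        return u.id + ":" + u.name;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 42:li
    }
}
```

A real ORM adds configured mappings, SQL generation, caching, and change tracking on top of this materialization step.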

Advantages: It is mainly used in the EJB layer , which is highly configurable and flexible, and simplifies database operations.

Disadvantage: Difficult to configure.

Common software architecture

  • Three-tier architecture

In software architecture design, the layered structure is the most common and the most important. It is generally divided into three layers, from bottom to top: the data access layer, the business logic layer (or domain layer), and the presentation layer, as shown in the figure:

Figure 3-11 Three-tier architecture

Data access layer: sometimes called the persistence layer, it is mainly responsible for database access; in short, it implements the Select, Insert, Update, and Delete operations on data tables. If ORM elements are added, it also includes the mapping between objects and data tables and the persistence of object entities.

Business logic layer (BusinessRules): the core of the whole system, concerned with the business (domain) of the system. Taking the STS system as an example, the design of the business logic layer concerns the logic of sales tracking. Structurally, it encapsulates the operations of the data access layer; the layer mainly consists of classes that implement specific business logic.

Presentation layer (WebUI): the UI part of the system, responsible for the interaction between users and the system. Ideally this layer contains no business logic; presentation-layer logic relates only to interface elements. In the current project it is built with ASP.NET, so it contains many Web controls and their related logic.
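The three tiers and their downward-only calls can be sketched in a few lines (a minimal illustration; the document's project uses ASP.NET, but the layering is language-neutral, and all names here are invented):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Sketch of the three-tier call chain: presentation -> business logic -> data
// access; each layer knows only the layer directly beneath it.
public class ThreeTierSketch {

    static class ProductDal {              // data access layer: raw CRUD
        private final Map<Integer, Double> table = new HashMap<>();
        void insert(int id, double price) { table.put(id, price); }
        Double select(int id) { return table.get(id); }
    }

    static class ProductBll {              // business logic layer: domain rules
        private final ProductDal dal;
        ProductBll(ProductDal dal) { this.dal = dal; }
        double priceWithTax(int id) { return dal.select(id) * 1.17; }
    }

    // Presentation layer: formatting only, no business rules.
    static String render(ProductBll bll, int id) {
        return String.format(Locale.ROOT, "price: %.2f", bll.priceWithTax(id));
    }

    static String demo() {
        ProductDal dal = new ProductDal();
        dal.insert(1, 100.0);
        return render(new ProductBll(dal), 1);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // price: 117.00
    }
}
```

Because the tax rule lives only in `ProductBll`, changing it touches neither the storage code nor the formatting code, which is the loose coupling the text describes.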

  • Five-tier architecture

SaaS software architecture can also be divided more finely, from top to bottom, into: the user interface (presentation) layer, the business logic layer, the general layer, the application framework layer, the remote access (WebService) layer, and the data access layer, as shown in the figure:

Figure 3-12 Microsoft-based .NET architecture design

User Interface Layer (UI)

The user interface layer is the interface the user operates directly, consisting of the interface appearance, form controls, frames, and other parts. It is responsible for the interaction between users and the system. Ideally this layer contains no business logic; presentation-layer logic relates only to interface elements. In the current project it is built with ASP.NET, so it contains many Web controls and their related logic.

    • Interface appearance, including skins, images, and CSS style sheets.
    • Form controls mainly include common forms and user-defined controls.
    • The framework mainly includes Master Page and Frame Page.
    • Others, mainly JavaScript files, DLL files, reports, database schema creation scripts, and Model development templates.

Business logic layer ( BusinessRules )

This is the core of the whole system, concerned with the business (domain) of the system. Taking the STS system as an example, the design of the business logic layer concerns the logic of sales tracking. Structurally, it encapsulates the operations of the data access layer; the layer mainly consists of classes that implement specific business logic.

    • BLFactory business logic factory
    • IBL business logic interface
    • BusinessRules business logic implementation

General layer

The general layer runs through the presentation and business logic layers of the entire project. It mainly holds the project's more general constant definitions and general services. A Service here refers to the general methods in the current project's business logic, written in corresponding static classes and provided as services.

CommonLayer: stores common constants and methods.
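The static service classes of the general layer can be sketched as follows (a minimal illustration; the constant and method names are invented):

```java
import java.util.Locale;

// Sketch of a general-layer utility: project-wide constants and general
// services kept in a static class shared by the presentation and business
// logic layers.
public final class CommonLayer {

    private CommonLayer() {}                 // utility class: no instances

    public static final int PAGE_SIZE = 20;  // project-wide constant

    // A general service: a formatting rule used by several layers.
    public static String orderNo(int year, int seq) {
        return String.format(Locale.ROOT, "ORD-%d-%05d", year, seq);
    }

    public static void main(String[] args) {
        System.out.println(orderNo(2024, 7)); // ORD-2024-00007
    }
}
```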

Data access layer

This layer has the most complex structure and mainly consists of the following sublayers: the data access factory layer (DALFactory), the data access interface layer (IDAL), the custom query layer (PersistenceFacade), the temporary layer (DataAccessLayer), and the data persistence layer (PersistenceLayer).

The following describes each sublayer from bottom to top:

    • The PersistenceLayer is the bottom layer of the framework design (apart from the application framework layer). It is mainly responsible for objectifying the physical database using ORM ideas: database tables are mapped to entity classes, and the corresponding fields to class properties. The physical database thus becomes completely transparent to developers, and applying ORM makes the design independent of the specific database implementation.
    • Specifically, we use ActiveRecord, the lightweight data access component of the well-known open source Castle project.
    • The PersistenceFacade layer and IDAL define all query methods used in the project, corresponding to the data entities defined in the PersistenceLayer. The query classes defined here may use any combination of the three query mechanisms provided by ActiveRecord (the simple interface provided by ActiveRecordBase, the simple query SimpleQuery, and the custom query CustomerQuery), and each class must implement the relevant interface defined in the IDAL interface layer.
    • The DALFactory layer, as the factory for data access, invokes the relevant operations in the data access components composed of IDAL and PersistenceFacade through the .NET reflection mechanism.
    • The DataAccessLayer is a temporary layer. Strictly speaking it is unnecessary, because no SQL statements need be written in the project; all SQL is replaced with HQL. The layer exists to ease the technical transition of project team members: it allows operating the database through SQL (not recommended) and will no longer be provided once the architecture is stable.

Application framework layer ( Framework )

The purpose of this layer is technical accumulation: common elements shared between projects are moved into the application framework layer to achieve code reuse. The layer can later be treated as a black box and can include common components.

    • Framework: accumulates methods and controls that can be abstracted.
    • MSMQMessage: message processing queue implementation.
    • Pager: general paging class.
    • Report: general report class.
    • Controls: control handling class.
    • DataFormat: data format conversion class.
    • WebUI: page processing class.
    • Validate: data validation.
    • Object: conversion and access between objects.

The benefits of a layered architecture

1. Developers can focus on just one layer of the entire structure;

2. It is easy to replace the original implementation of a layer with a new one;

3. Dependencies between layers are reduced;

4. It is conducive to standardization;

5. Logic at each layer can be reused.

In a nutshell, layered design achieves the following goals: separation of concerns, loose coupling, logic reuse, and standard definition.

A good hierarchical structure makes the division of labor among developers clearer. Once the interfaces between layers are defined, the developers responsible for different parts of the design can focus their attention and work in parallel. For example, UI designers need only consider the user experience and interactions, domain designers can focus on the design of business logic, and database designers need not worry about tedious user interaction. With each developer's task well defined, development progress improves rapidly.

The benefits of loose coupling are obvious. If a system is not layered, its various pieces of logic are tightly intertwined and interdependent, and none can be replaced; any change ripples through the whole, with serious impact on the project. Reducing dependencies between layers not only ensures future scalability but also benefits reusability: once a unified interface is defined for a functional module, it can be called by all modules without re-implementing the same function.

Standards are also essential to a good layered design. Only with a certain degree of standardization can a system be extensible and replaceable, and communication between layers must likewise follow standardized interfaces.

Just as no gold is pure and no one is perfect, the layered structure inevitably has some drawbacks:

1. It reduces system performance. This goes without saying: without layering, many business operations could access the database directly to obtain the data they need, but now everything must pass through the middle layers.

2. It sometimes leads to cascading modifications, especially in the top-down direction. If a function is added to the presentation layer, then to keep the design consistent with the layering, corresponding code may have to be added to the business logic layer and the data access layer as well.

Software Architecture View

Philippe Kruchten writes in his book “Introduction to the Rational Unified Process”:

An architectural view is a simplified description of a system as seen from a certain perspective or point, covering a particular aspect of the system and omitting entities that are not related to this aspect.

That is to say, an architecture covers too many concerns and decisions for the human brain to grasp in a single pass. A "divide and conquer" approach is therefore adopted, designing from different perspectives, which also makes the architecture easier to communicate and document.

Figure 3-13 The 4+1 view method proposed by Philippe Kruchten

The different architectural views of this approach carry different architectural design decisions and support different goals and uses:

  • Logical view: when an object-oriented design approach is adopted, the logical view is the object model.
  • Development view: describes the static organization of the software in the development environment.
  • Process view: describes the design of the system’s concurrency and synchronization aspects.
  • Physical view: describes how the software maps onto hardware, reflecting the distributed design of the system.

Figure 3-14 Architectural design for different requirements using the 4+1 view method

Logical view. The logical view focuses on functionality, including not only the functions visible to users but also the “auxiliary function modules” that must be provided to implement them; these may be logical layers, functional modules, and so on.

Development view. The development view focuses on packages, including not only the source programs to be written but also third-party SDKs, ready-made frameworks and class libraries that can be used directly, and the system software or middleware on which the developed system will run. There may be a mapping between the development view and the logical view: for example, a logical layer generally maps to multiple packages.

Process view. The process view focuses on runtime concepts such as processes, threads, and objects, and on the related issues of concurrency, synchronization, and communication. Its relationship to the development view: the development view generally concerns the static, compile-time dependencies between packages, whereas after the programs run they manifest as objects, threads, and processes; the process view concerns how these runtime units interact.

Physical view. The physical view focuses on how the target programs, together with the runtime libraries and system software they depend on, are finally installed or deployed onto physical machines, and on how machines and networks are deployed to meet the reliability and scalability requirements of the software system. Its relationship to the process view: the process view pays particular attention to the dynamic execution of the target programs, while the physical view attends to their static locations; the physical view is the architectural view that comprehensively considers how the software system interacts with the whole IT system.

Product Development Model

The product development model is a focus of corporate strategy. The chosen development route determines a series of management methods and team-building issues, and embodies the organizational strategy and management philosophy of the enterprise. The development model runs through the entire product life cycle: from market research, project initiation, requirements analysis, design, detailed design, development, testing, release, and maintenance in traditional software engineering, to the now-popular IPD and market-oriented business models, all of which are changing the traditional R&D model. The new idea centered on service experience is the essence of the SaaS model: we should not develop for the sake of product development alone, but develop for market value.

Several Mainstream Product Development Models

  • Functional development with project management

This is the product development model usually adopted by enterprises. The general manager or the marketing department settles on a new product idea and decides whether to initiate a project. The R&D/technical department is responsible for design, development, and testing, producing a product prototype or service plan, which is then handed to the production department for batch manufacturing; the sales department handles sales, and the customer service department provides after-sales service. Each functional department is responsible only for one stage of new product development and defines its own operating process. Even where there are nominal project managers or product managers, they are not accountable for the product’s ultimate success in the market.

Under such a management system, the emphasis falls on the vertical management of individual departments, while the horizontal chain of profit model, product concept, research, production, supply, and sales goes unmanaged. The product development process therefore receives little attention; few people look comprehensively at a product’s market value, product strategy, development method, and marketing mix, so new product decisions are often made without seeing the whole picture. The heads of functional departments care only about handing the product smoothly to the next link and often complain about the quality of work in the previous one, so top management must do a great deal of coordination, communication, and decision-making. When an enterprise grows to a certain scale, especially when several products are developed at once, the general manager tends to lose focus and stay busy “fighting fires”, making decisions on product design details and internal management.

Figure 3-15 Functional development with project management

  • PACE: Product and Cycle Optimization Approach

PACE (Product And Cycle-time Excellence) was proposed in 1986 by the American management consulting firm PRTM, which uses it to guide enterprises in improving their product development processes. It provides a complete general framework, a set of elements, and standard terminology.

1. The basic idea of PACE

(1) Product development is driven by the decision-making process, a process that can be managed and improved, not just by genius and luck.

(2) The product development process needs to be defined and implemented to ensure that all relevant personnel of the enterprise have a common understanding and know how to coordinate and cooperate.

(3) Product development is a structured process with four levels and a three-tier schedule, and it needs to be embedded in a logical process framework. Problems must be solved through comprehensive methods; isolated, scattered improvements are not advisable.

(4) Each stage in the evolution of the four processes must be taken step by step. Prematurely introducing an element of the next stage into the current one is pointless, like fitting a turbocharger to a bicycle: it contributes little to speed but adds weight.

(5) Product development needs to be managed through a shared decision-making process; top management’s focus is on the key decisions and on balancing the development pipeline.

(6) The product development project team and senior management need a new organizational model (the core team method). The product development team should have an empowered product manager and several cross-functional members, while senior management acts as a product approval/management committee.

(7) Design methods and automated development tools are effective only with supporting infrastructure; improving the product development process cannot rely on so-called “silver bullet” methods and tools.

2. Representative works of PACE

In the book “PACE: Product And Cycle-time Excellence”, written by PRTM founder Michael E. McGrath, the theory and knowledge system of PACE are introduced comprehensively and systematically.

McGrath also believes that product development is the main battlefield of 21st-century business, and that the future will be an “era of R&D productivity” in which new products can be developed at scale; companies will pay more attention to new product development resource management, project management, technology management, and product strategy.

3. The main core content of PACE

PACE holds that product development should focus on seven core elements: phase review and decision-making; a cross-functional core team; a structured development process; development tools and techniques; product strategy; technology management; and pipeline management that balances multiple products against resource inputs.

  • IPD: Integrated Product Development

IPD (Integrated Product Development) derives its ideas from PACE. On this basis, companies such as Motorola, DuPont, and Boeing continued to improve it in practice; IBM developed it through learning and practice and successfully assisted Huawei in implementing the system. The IPD process can be summarized as “one structured process, two types of cross-departmental teams, three system frameworks, four major decision review points, five core concepts, six important stages, seven related elements, and eight positioning tools”; its core ideas are process reengineering and product reengineering.

Figure 3-16 IPD development mode

  • SGS: Stage-Gate Management System

SGS (Stage-Gate System), the stage-gate management system, was founded by Robert G. Cooper in the 1980s and is used by companies in the United States, Europe, and Japan to guide new product development. (Cooper has long been committed to empirical research on the management of product innovation and development. He believes that through extensive surveys and statistical analysis the laws governing product innovation can be discovered, and many of his empirical research reports have become an important basis in both academia and industry for analyzing why new products succeed or fail.)

1. The basic idea of SGS:

(1) Do projects right: listen to consumers, do the necessary preparatory work, and use cross-functional teams.

(2) Do the right projects: carry out strict project screening and portfolio management.

2. Representative works of SGS:

In his book “Winning at New Products: Accelerating the Process from Idea to Launch”, Professor Cooper introduces the various aspects of stage-gate management systems in detail and presents extensive research findings.

3. The main core content of SGS

The core of SGS is the stage-gate process for new product development, modeled as follows:

Figure 3-17 SGS development mode

SGS pays close attention to effective gate decision-making and portfolio management, making go/kill decisions at each stage of product development to prevent worthless projects from wasting further resources. In addition, multiple products are prioritized so that the combined advantages of enterprise resources can be fully exploited.

SGS also emphasizes marketing work before launch. A product’s value is ultimately realized through marketing, so how to market it should be considered from the earliest stage of development: before development is complete, finish the market analysis, formulate product goals, position the core strategy, and refine the marketing program.

SGS recommends that enterprises formulate a product innovation strategy. For an enterprise, sustainable competitiveness lies in continuously introducing successful new products, and a far-sighted product innovation strategy and product planning aid the development of, and decisions about, each new product.

  • PVM: Product Value Management Model

The idea of product value management (PVM) is based on the profit model, on D. Lehmann and Crawford’s “Product Management”, and on the SGS stage-gate system, and has been adopted by many small and medium-sized enterprises as well as world-renowned brands. PVM introduces the profit model and its design method in detail. It focuses on customers, needs, and markets; is guided by competition and profit; and runs from corporate vision and strategy implementation through to product planning. Centered on product management and the product life cycle, it covers the whole process of a new product from conception to commercialization, emphasizing value-chain and value-stream analysis based on the business model. Rational strategy and strict evaluation procedures are the reliable guarantees of product innovation (development).

1. The basic idea of PVM:

(1) Do the right things: strategy determines direction and model determines performance; emphasize product planning and product management.

(2) Do things right: process determines method; focus on product requirements analysis, product planning, technology development, and marketing mix management.

(3) Do the right things correctly: capability determines success or failure; project management is the guarantee of success.

2. The main core content of PVM:

(1) PVM attaches great importance to profit-model and value-chain analysis, holding that “success rests on an excellent organization, and excellence comes from an extraordinary profit model”. It emphasizes product planning and product management, raising the research focus from the level of specific product development to the level of product value and strategy.

(2) PVM also believes that effective product development process entry management and decision review are needed, and the product development process and market management process are organically integrated to reduce the waste of limited enterprise resources by worthless products.

(3) PVM highlights the coordination of product demand analysis, product concept and marketing mix in order to realize customer value and give full play to the combined advantages of enterprise resources.

(4) PVM emphasizes the core role of project management in product development, and advocates the implementation of product manager system for product management.

(5) PVM focuses on technology development platform construction, core technology development and cost value engineering, and believes that a systematic way of thinking is the correct way to improve R&D performance rather than KPI+BSC.

(6) PVM also holds that management is the core competitiveness of an enterprise, and advocates R&D strategic alliances; competition between enterprises will shift to competition in product management.

Product Development and Technology Development

  • The difference between product development and technology development

The most important things in product development are to focus on customer needs and to satisfy them quickly and at low cost with technologies or skills, which need not all be created in-house. Product development is market-driven, and product development is not allowed to fail.

Technology development is a personalized creative process. In the early stage of product development we often conduct market research first, then make technical forecasts, and then draw up product plans; this is a typical technology-driven process. Centered on technology and principles, it is a creative process whose risks and cycles are unpredictable, and technology development is allowed to fail.

Product development and technology development are mutual input and output.

A start-up with limited capital must put survival first, and can flexibly choose between taking on product development or technology development work. An enterprise aiming at long-term growth should invest in technology development as much as possible, focusing on product forecasting and new-technology development and striving to build products that lead the industry; leading the market brings greater profits, and with them sustainable development.

  • Three Eras of Product Development
  1. The product era

The product era launches products on the basis of technology: product-centric.

Traditional output process: starts from resources and technology.

Market environment: products are in short supply; in this seller’s-market era, business rests on product sales.

Applicable companies: those with mature products serving a broad market. In an era of fierce competition, the technology must be irreplaceable and leading, so that it can form a barrier.

Risk: After entering the era of competition, R&D will become a big cost pressure.

  2. The era of personalized service

The era of personalized service customizes products to the individual needs of customers: customer-centric.

Current and future output process: starts entirely from the customer and the market, with the best resources and technology obtained through outsourcing or leasing.

Market environment: a customized buyer’s market.

Applicable companies: those with good channels and genuine system-integration capabilities. The marketing department is the company’s largest, and the relationship among technology, marketing, and sales is olive-shaped.

Risks: without technical and product management capabilities, the profits earned may all be lost once the company tries to develop products on its own.

  3. The marketing era

The marketing era combines customer needs with existing technology platforms to launch businesses: profit-centric.

Present and future output process: starts from customers and marketing, develops products from off-the-shelf technology, and separates product development from technology development.

Market environment: technology shelves and product platforms are initially in place, and customer needs are diversified.

Applicable companies: those with semi-mature products undergoing functional improvement or partial innovation, or with platform-based products. The marketing department is very important, and the relationship among R&D, sales, and marketing is a dumbbell-shaped structure.

Risk: if no one is dedicated to building the product platform and requirements cannot be controlled, much repetitive development must be done and the development cycle becomes uncontrollable, causing losses for the company.

Product Version

Product: refers to a version delivered to the user. Three versions are usually defined:

  • V version: the platform version.
  • R version: the final product delivered to the user.
  • M version: a version customized for a specific customer on the basis of the R version.

The difference between product (R) and product platform (V):

Table 3-1 Product version

 

|                    | Product (R)        | Product platform (V) |
| ------------------ | ------------------ | -------------------- |
| Market range       | Market segments    | General market       |
| Development object | Product package    | Technology package   |
| Plan               | Business plan      | R&D plan             |
| Release interval   | Short (months)     | Long (years)         |
| Target audience    | External customers | Inside the company   |

Build a SaaS Product Platform

R products are developed on the V platform; on the basis of an R product, M versions can then be customized through the configurability and extensibility of SaaS.

V, R, M constitute the product development structure tree as shown in the figure:

Figure 3-21 VRM product tree structure

Research and practice show that process is subordinate to model, and model determines process; model is subordinate to strategy, and strategy determines model. Together they form a typical collaborative supply-chain relationship.

The business model exists in the whole process of production, operation and management of the enterprise, is related to the business performance of the enterprise, and supports the realization of the strategic goal of the enterprise. At present, in-depth research on the goals and methods of BMR (business model reorganization), the basic relationship between BMR and BPR (business process reorganization), and the real implementation are very necessary to promote enterprise management innovation and enhance enterprise competitiveness.

Below, we take product R&D as an example to analyze and discuss the goals and methods of product R&D model reorganization.

Status of product R&D: authoritative data show that 80% of the world’s R&D and 71% of its technological innovation are created and owned by the world’s top 500 companies. The core technologies of many industries in China still depend on foreign technology. In 2004, the average R&D investment of China’s top 500 manufacturing enterprises was 190 million yuan, only 1.88% of their sales revenue.

At this stage, the product R&D systems of Chinese enterprises suffer many problems that seriously restrict the improvement of R&D capability and the rapid introduction of new products. These are manifested chiefly in weak R&D innovation awareness, insufficient R&D capability, inappropriate R&D strategies, unsound R&D institutions, few expert R&D personnel, low R&D investment, long R&D cycles, high R&D costs, unreasonable R&D processes, and unmet customer needs. At the core, an independent-innovation R&D system has not yet been established.

Target of product R&D model reorganization: based on this analysis, the goal is to quickly establish an independent-innovation R&D system, improve R&D capability, shorten the R&D cycle, and reduce R&D costs, so as to develop new products that customers really need and that carry independent intellectual property rights and core technologies.

Product R&D model reorganization method: Based on the enterprise development strategy and reorganization goals, we first formulate the enterprise product R&D strategy. The core is to develop more products with independent intellectual property rights and core technologies by rapidly establishing an independent innovation R&D system of the enterprise, improving R&D capabilities.

The second is to choose an R&D pathway: whether to develop independently, cooperate domestically, or commission domestic R&D; whether to introduce foreign core technologies through joint ventures or purchase them directly with foreign exchange; or whether to “go out” and acquire foreign companies outright to obtain core technologies and outstanding R&D personnel. The aim is to track the world’s scientific and technological frontier effectively, acquire foreign core technologies, and rapidly improve R&D capability.

The third is to create multi-level, multi-form R&D institutions, such as post-doctoral R&D workstations in China’s top 500 manufacturing companies.

The fourth is to establish a scientific employment mechanism to directly hire experts from home and abroad, especially professional leaders. At the same time, we must pay close attention to cultivating the R&D personnel of the enterprise, and form a reasonable echelon structure of R&D personnel as soon as possible.

The fifth is for China’s top 500 manufacturing enterprises to increase R&D investment, striving each year to devote an average of 3% of main-business sales revenue to building the enterprise’s independent-innovation R&D system.

The sixth is to establish a product collaborative research and development system based on information network. Integrate R&D technology, management technology and information technology to drive innovation in product R&D models and product design concepts.

The seventh is to speed up the design and implementation of all relevant processes and their supporting systems during the establishment of the independent innovation R&D system, so as to improve the role and efficiency of the process.

Build and accumulate your own development system

Our goal is to comply with industry norms while retaining our own characteristics. Successful software companies have rich stores of reusable code components. A few lines of code may be insignificant in a single system, but once reused across many systems they become valuable. A single project is not necessarily profitable, but building a new project from previous project experience and code costs far less. The software industry must therefore establish its own knowledge base and keep accumulating it; this becomes an inexhaustible asset.

Build a reusable knowledge base

  • Take advantage of development templates 

Assembling our common pages from templates we developed ourselves greatly reduces page design and development code and improves development efficiency.

The templates include page style control, common paging components, and common operations such as opening a page, deleting, adding, and exiting.

  • Control management 

Going forward, we standardize on the controls in Microsoft’s AjaxControlToolkit. This set basically covers all the controls we use; its main features are refresh-free (partial-page) updates and good integration.

  • Common component management 

Component management distinguishes components usable in any project from components usable only in the current project. These components are in fact assemblies composed of various classes; a compiled component is referenced as a DLL file.

Components available in any project are placed in the CommonLayer layer.

Common components are managed uniformly: each method of a common component is written in a standard format and must document an example call, its parameters, and its return value.

Style Design

  • The role of Themes

Themes are a Web site customization feature introduced in ASP.NET 2.0. They set properties of pages and controls, and these settings can be applied to an entire application, a single page, or a single control.

In general, a theme is a set of visual interface settings. In ASP.NET, a theme is a collection of property settings that define the appearance of pages and controls, and that appearance can then be applied consistently across the pages of a Web application, across an entire Web application, or across all Web applications on a server.

A theme consists of a set of elements: skins, cascading style sheets (CSS), images, and other resources; at a minimum it contains skins. Themes are defined in special directories on the Web site or Web server.

  • Definition of Skin

Skin files have the file extension .skin and contain property settings for individual controls (for example, the Button, Label, TextBox, or Calendar control). A control skin resembles the control tag itself but contains only the properties you want to set as part of the theme. For example, here is a control skin for the Button control:
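The example itself appears to have been lost from the original text; a typical Button skin, following standard ASP.NET .skin syntax, sets only themed properties (the specific property values here are illustrative):

```
<asp:Button runat="server" BackColor="LightBlue" ForeColor="Black" Font-Bold="true" />
```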

Create a .skin file in the theme folder. A .skin file can contain one or more control skins for one or more control types. Skins can be defined in separate files for each control, or skins for all themes can be defined in one file.

There are two types of control skins – “default skin” and “named skin”:

When a theme is applied to a page, default skins are automatically applied to all controls of the same type. A control skin with no SkinID property is a default skin. For example, if you create a default skin for the Calendar control, it applies to all Calendar controls on pages that use the theme. (Default skins are matched strictly by control type, so a Button skin applies to all Button controls, but not to LinkButton controls or to controls derived from the Button object.)

A named skin is a control skin whose SkinID property is set. Named skins are not applied to controls automatically by type; instead, you apply one explicitly by setting the control’s SkinID property. By creating named skins, you can give different instances of the same control different appearances within an application.
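As a sketch of the two kinds, a single .skin file can hold both a default skin and a named skin for the same control type; the SmallCalendar SkinID below matches the example used later for applying named skins, and the property values are illustrative:

```
<%-- Default skin: no SkinID, applies to every Calendar under this theme --%>
<asp:Calendar runat="server" BackColor="White" BorderColor="Gray" />

<%-- Named skin: applied only to controls that set SkinID="SmallCalendar" --%>
<asp:Calendar runat="server" SkinID="SmallCalendar" Width="120px" Height="100px" />
```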

  • Cascading Style Sheets

Themes can also contain cascading style sheets (.css files). When a .css file is placed in the theme directory, the style sheet is automatically applied as part of the theme. Define the style sheet in the theme folder with the .css file extension.
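For instance, a style sheet dropped into the theme folder needs no link element on the page; assuming a theme named BlueTheme, rules such as these apply to every themed page (file name and selectors illustrative):

```
/* App_Themes/BlueTheme/BlueTheme.css -- loaded automatically with the theme */
body { background-color: #eef4fb; font-family: Verdana, sans-serif; }
h1   { color: navy; }
```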

  1. Create a new folder called App_Themes on the website. (The folder must be named App_Themes.)
  2. Create a subfolder of the App_Themes folder to hold the theme files; the subfolder’s name is the theme name. For example, to create a theme named BlueTheme, create a folder named \App_Themes\BlueTheme.
  3. Add the skin files, style sheets, and images that make up the theme to the new folder.
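Following these steps, the on-disk layout for the hypothetical BlueTheme would look roughly like this (file names illustrative):

```
WebSite/
  App_Themes/              folder name fixed by ASP.NET
    BlueTheme/             folder name = theme name
      Button.skin          skins; one file per control type is a convention, not a requirement
      Calendar.skin
      BlueTheme.css        applied automatically as part of the theme
      images/
```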
  • Create a skin
  1. Create a new text file in the theme subfolder with the .skin extension.
  2. A typical convention is to create a .skin file for each control, such as Button.skin or Calendar.skin. However, you can create as many or as few .skin files as you want; a skin file can contain multiple skin definitions.
  3. In the .skin file, add the control definition (using declarative syntax), but include only the properties to be set for the theme and omit the ID attribute. The control definition must contain the runat="server" attribute.
  4. Repeat steps 2 and 3 for each control skin you want to create.
  • Apply skins to controls

Skins defined in a theme apply to all control instances on a page or in an application to which the theme has been applied. Sometimes, however, you want to apply a specific set of properties to a single control. This is achieved by creating a named skin (an entry in the .skin file with its SkinID property set) and applying it to individual controls by ID. For more information on creating named skins, see “How to: Define ASP.NET Themes”.

To apply a named skin to a control:

Set the SkinID property of the control, as in the following example:

<asp:Calendar runat="server" ID="DatePicker" SkinID="SmallCalendar" />

If the page theme does not include a control skin that matches the SkinID property, the control uses the default skin for that control type.
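For completeness, a theme only takes effect once it is applied; one way is per page via the @ Page directive (BlueTheme is the illustrative theme name used above):

```
<%@ Page Language="C#" Theme="BlueTheme" %>
```

Alternatively, setting <pages theme="BlueTheme" /> under <system.web> in Web.config applies the theme to every page in the application; a page-level Theme attribute then overrides it where specified.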

Configuration management

Scientific configuration management, supported by advanced configuration management tools, readily solves the management problems that arise during project development.

  1. List software configuration items required for each stage of software development, operation, and maintenance

Software configuration items are the various information items produced as software development proceeds, such as work products, stage deliverables, and the tools and software used. Table 3-2 lists several classes of software configuration items and the stages that produce them.

Table 3-2 Software configuration items

| Classification    | Stage                                                   | Examples |
| ----------------- | ------------------------------------------------------- | -------- |
| Environment class | Software development or maintenance environment         | Compilers, operating systems, editors, database management systems, development tools, project management tools, documentation tools |
| Definition class  | Work products from the requirements analysis and definition phase | Requirements specification, project development plan, design criteria or guidelines, acceptance test plan |
| Design class      | Work products from the design phase                     | System design specifications, program specifications, database design, coding standards, user interface standards, test standards, system test plans, user manuals |
| Coding class      | Work products after coding and unit testing             | Source code, object code, unit test data and unit test results |
| Maintenance class | Work products generated in the maintenance phase        | Any of the above configuration items that need to be changed |

Only by clarifying which software configuration items exist at each stage can a software enterprise implement software configuration management with confidence.

  2. Classify and supplement existing software configuration items to further improve software configuration

When a software company delivers a product, different users have different needs. Table 3-3 shows the working environments of two different users:

Table 3-3 Working environments

| User | Computer configuration | Operating system | Back-end database system |
| --- | --- | --- | --- |
| User A | PIV 1.4 GHz | Windows 2000 | SQL Server 2005 |
| User B | PIV 3.5 GHz | Windows 2000 | Oracle 9.0 |

To meet the usage requirements of individual users, our software products must take these differences into account. When designing the product, we arrange the configuration items as shown in Table 3-4:

Table 3-4 Module arrangement

| User | Configuration items (modules) |
| --- | --- |
| User A | module a, module b, module c, module e, module h |
| User B | module a, module b, module c, module f, module g |

To realize these two different software configurations in actual development, we can develop each configuration item separately and then combine the items into different products according to each user's needs, as shown in Figure 3-22:

 

Figure 3-22 Combining configuration items into different products for different users
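The per-user assembly in Tables 3-3 and 3-4 can be sketched in code. The following is a minimal, hypothetical Java example; the module identifiers and their names are invented for illustration and do not come from the original text.

```java
import java.util.*;

// Hypothetical sketch: assembling different products for different users
// from a shared pool of configuration items (modules).
public class ProductAssembler {
    // The shared module pool, developed once by the software company.
    private static final Map<String, String> MODULE_POOL = Map.of(
        "a", "core accounting", "b", "reporting", "c", "user management",
        "e", "SQL Server adapter", "f", "Oracle adapter",
        "g", "network edition", "h", "stand-alone edition");

    // Combine the configuration items a user ordered into one product.
    public static List<String> assemble(List<String> moduleIds) {
        List<String> product = new ArrayList<>();
        for (String id : moduleIds) {
            String module = MODULE_POOL.get(id);
            if (module == null)
                throw new IllegalArgumentException("unknown module: " + id);
            product.add(module);
        }
        return product;
    }

    public static void main(String[] args) {
        // User A and User B share modules a, b, c but differ
        // in their database adapter and edition.
        System.out.println("User A: " + assemble(List.of("a", "b", "c", "e", "h")));
        System.out.println("User B: " + assemble(List.of("a", "b", "c", "f", "g")));
    }
}
```

The point of the sketch is that each module is developed once and products differ only in which items are combined.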

  3. Effective control and management of changes to software projects

Software enterprises inevitably encounter changes during software development, operation, and maintenance. Changes come mainly from two sources: on the one hand, users, for example when they request modifications to the scope of work or the requirements; on the other hand, the development side, for example when flaws are discovered in the design. For both situations, software companies can respond in the following ways:

Identify the people on both sides who are authorized to handle changes

It should be agreed in advance which users have the right to request requirement changes and which members of the project development team have the right to accept them, and the number of people so authorized on each side should be kept small. This constrains the demand side, so that every requirement it raises must first be discussed carefully. When the project development team receives a change request, the people authorized to implement changes can discuss it, consider the overall situation, and then update all affected documents, programs, and plans.

Strict review of changes

Not every change needs to be made, and not every change needs to be made immediately. The purpose of the review is to decide whether and when a change is needed. For example, an interface style issue can be left unmodified for now, or scheduled for later optimization. Modifications to core modules, however, must be checked strictly, or they may cause global problems.

Assess the impact of changes

Changes come at a cost. Evaluate the cost of each change and its impact on the project, make sure users understand the consequences, and reach a judgment together with them.

Let the customer confirm whether the cost of the change is acceptable. While evaluating the cost and discussing it with the customer, you can ask the user to judge together: "We can modify it, but can you accept the consequences?", and then list the consequences of the modification one by one.

  4. Effective management of software versions

To suit different operating environments, platforms, and user requirements, a software product evolves into different versions of the same software. Software enterprises can implement version control through the following two common methods.

Number version identifier

Versions are expressed numerically: the first edition is denoted V1.0, the second V2.0. V1.0 and V2.0 are base version numbers, while V1.1 and V1.2 are the first and second revisions of base version V1.0; such revisions are minor. If a major change is made, or multiple revisions accumulate into a globally important change, the base version number is increased, for example to V2.0. Number version identification is shown in Figure 3-23:

Figure 3-23 Number version identification
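The numeric scheme above can be sketched in code. This is a minimal Java illustration; the class and method names are this sketch's own invention, not part of any configuration management tool.

```java
// Hypothetical sketch of the numeric version scheme described above:
// V1.0 and V2.0 are base versions; V1.1 and V1.2 are minor revisions of V1.0.
public class Version implements Comparable<Version> {
    final int major, minor;

    Version(int major, int minor) { this.major = major; this.minor = minor; }

    // Parse identifiers such as "V1.0" or "V2.3".
    static Version parse(String id) {
        String[] parts = id.substring(1).split("\\.");
        return new Version(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
    }

    // A change of the major number marks a new base version; a change of
    // the minor number marks a revision of the same base version.
    boolean isRevisionOf(Version base) {
        return major == base.major && minor > base.minor;
    }

    @Override public int compareTo(Version o) {
        return major != o.major ? Integer.compare(major, o.major)
                                : Integer.compare(minor, o.minor);
    }

    @Override public String toString() { return "V" + major + "." + minor; }

    public static void main(String[] args) {
        Version v10 = parse("V1.0"), v12 = parse("V1.2");
        System.out.println(v12 + " is a revision of " + v10 + ": " + v12.isRevisionOf(v10));
    }
}
```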

Symbol version designation

This notation extracts the important information of a version. For example, V1/VMS/DB SERVER denotes version 1 of the database server running on the VMS operating system. A software enterprise might likewise use names such as "personnel management system, stand-alone edition" or "personnel management system, network edition".

  5. Implement effective configuration auditing

Software enterprises can carry out configuration auditing from the following two aspects:

"Configuration management activity audit"

The configuration management activity audit ensures that all configuration management activities of project team members follow the approved software configuration management policies and procedures, for example the frequency of check-in/check-out and the rules for promoting work products to higher maturity levels.

"Baseline review"

The baseline review ensures the integrity and consistency of baselined software work products and that they meet their functional requirements. The completeness of the baseline can be examined from several angles: Does the baseline library include all planned configuration items? Is the content of each configuration item itself complete (for example, do the documents it references actually exist)? For code, check against the code listing that every source file is present in the baseline library, then compile all source files to verify that the final product can be built. Consistency mainly concerns the relationship between requirements and design, and between design and code; especially when changes occur, check that every affected part has been changed accordingly. Non-conformances found in the audit are recorded and tracked until resolved.
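One of the completeness checks above, comparing the code listing against the baseline library, might be sketched as follows. The file names are purely illustrative.

```java
import java.util.*;

// Hypothetical sketch of one baseline-review check: verify that every
// source file named on the code listing is present in the baseline
// library, and report any non-conformance for tracking.
public class BaselineAudit {
    // Returns the files named on the listing but missing from the baseline.
    static List<String> missingFromBaseline(Set<String> baseline, List<String> codeListing) {
        List<String> missing = new ArrayList<>();
        for (String file : codeListing)
            if (!baseline.contains(file))
                missing.add(file);
        return missing;
    }

    public static void main(String[] args) {
        Set<String> baseline = Set.of("Main.java", "Dao.java");
        List<String> listing = List.of("Main.java", "Dao.java", "Report.java");
        // Report.java is planned but absent, so the audit flags it.
        System.out.println("Non-conformances: " + missingFromBaseline(baseline, listing));
    }
}
```

In practice the same idea extends to the other checks: each flagged item becomes a recorded non-conformance tracked until resolved.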

In practice, auditing is generally regarded as an after-the-fact activity and is easily overlooked. "After the fact" is relative, however: problems found in early audits guide and inform the project's later work. To improve audit effectiveness, a checklist should be prepared in advance, as shown in Table 3-5.

Table 3-5 Audit checklist

| Checklist item | Yes | No | Notes |
| --- | --- | --- | --- |
| Are check-ins and check-outs performed promptly? | | | |
| Is the configuration repository backed up regularly? | | | |
| Is the configuration system checked for viruses periodically? | | | |
| Have the non-conformances from the last review been resolved? | | | |
| Is audit work conducted regularly? | | | |
| Has a configuration review team been set up? | | | |

  6. Select the configuration tool

When choosing a commercial configuration management tool, a software company can consider the following factors.

Tool market share

What most teams choose is usually the better option. A high market share also usually indicates that the vendor is in good financial health and less likely to be acquired or shut down.

Features of the tool itself

Evaluate the tool itself for stability, ease of use, security, scalability, and so on, and try it out carefully before investing. Scalability is the easiest of these to overlook: you may deploy the tool in a team of a few people or a dozen today, but in the future dozens or hundreds of people may rely on it to build on the company's platform. Will the tool support that load when the time comes? If you must switch tools then, you will regret today's choice.

Abstract object model

The abstract object model provides a common business platform for enterprise-level application systems. It extracts the business logic shared by government and enterprise applications into a general business information system, on top of which government and enterprise information systems can be constructed, integrated, and run, reducing repetitive development work.

The model is built from reconfigurable abstract classes. These classes contain both complete methods, inherited and used directly by application developers, and abstract method definitions to be implemented by the developers of application business objects. Application developers can use this object model to build object-oriented applications and frameworks.

The abstract object model provides the following features:

  • Custom Business Object Properties
  • Variable business logic
  • Uniform Object Unique Identifier
  • Object Oriented Design Patterns
  • Query/filter by object properties
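The structure just described can be sketched in Java: a base class supplies complete, inherited behaviour (here, a uniform unique identifier), while business-specific logic is left abstract. The class names and the validation rule are invented for illustration.

```java
import java.util.UUID;

// Hypothetical sketch of the abstract object model described above.
abstract class BusinessObject {
    // Uniform object unique identifier, inherited unchanged by all subclasses.
    private final String id = UUID.randomUUID().toString();
    public final String getId() { return id; }

    // Variable business logic: each application business object implements this.
    public abstract boolean validate();
}

// An application developer builds concrete objects on the common platform.
class Invoice extends BusinessObject {
    final double amount;
    Invoice(double amount) { this.amount = amount; }
    @Override public boolean validate() { return amount > 0; }
}

public class AbstractModelDemo {
    public static void main(String[] args) {
        Invoice inv = new Invoice(120.0);
        System.out.println("id=" + inv.getId() + " valid=" + inv.validate());
    }
}
```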

Model driven

MDA (Model Driven Architecture) is a model-driven software development framework defined by the OMG. It is based on UML and other industry standards and supports the visualization, storage, and exchange of software designs and models. Unlike plain UML, MDA creates machine-readable, highly abstract models that are independent of implementation technology and stored in a standardized way. MDA treats the modeling language as a programming language rather than merely a design language. The key idea of MDA is that models play a central role in software development.

MDA derives from the well-known idea of separating the specification of a system's operation from the details of how the system uses the capabilities of its underlying platform. MDA provides a way (through related tools) to specify a system independently of any platform, to specify platforms, to choose a particular implementation platform for the system, and to translate the system specification onto that platform. The three main goals of MDA, achieved through this architectural separation, are portability, interoperability, and reusability.

Model-Driven Architecture is a technology system that the OMG has promoted heavily in recent years, and it has become a new focus for many researchers working on software modeling. Its core idea is to study the business model of a domain (for example, enterprise informatization, or solutions in the construction industry), extract a core domain model from it, and abstract that into a PIM (Platform Independent Model). The PIM is then refined, according to the target development platform (such as .NET or J2EE) and application platform (Windows or UNIX), into a corresponding PSM (Platform Specific Model). With suitable tools, such as ArcStyler, the corresponding code and software system can then be generated. This is, of course, only the general idea and method.

  1. MDA theory is still in an exploratory period; many of its theories and methods are immature, and mature tools do not yet exist. Judging by current trends, both the theory and the practical tools are still far from the expectations set by the OMG, and it will take at least several years for them to take shape.
  2. At present, both foreign open-source organizations and some domestic organizations are only at the initial stage of MDA. What many people call "applying MDA" is really just early exploration within the MDA system. For example, ORM explores MDA at one level of database applications, but it only solves the problem of entity model mapping. Recently an interviewee used ArcStyler 4.x to build an application model of a bank POS system, generated a little skeleton code that still needed modification, and claimed on that basis to have mastered MDA; that level of understanding truly left me speechless.
  3. The first hot spot of MDA may be the bridge; within MDA, mapping is the essential point, and transformation and interaction are merely extensions of it.
  4. For now, the language most likely to be used to implement the MDA system is Java, although I dislike some of Java's clumsier aspects.
  5. The core of MDA is the PIM, because it is the most abstract and the most unifying; at the same time, as things stand, the PIM is also a bottleneck. UML 2.0 (the latest version obtained from the OMG) is not yet sufficient as the language for building the entire MDA system, and some definitions in the MOF still need improvement, because for the system as a whole the MOF serves as the standard, and only a mature standard can produce correct mapping rules.
  6. Even when MDA reaches its full glory, only some programmers will lose their jobs, not all of them. At the very least, MDA tools must be built by someone, and no single MDA tool can cover every field, just as no single financial system works for all businesses, because standardization differs from field to field.
  • MDA’s process

The implementation of MDA mainly focuses on the following three steps:

  1. First, you model your application domain in UML at a high level of abstraction; this model has nothing to do with the technology that will implement it. We call it the Platform Independent Model (PIM).
  2. The PIM is then transformed into one or more Platform Specific Models (PSMs). This transformation is generally automated. A PSM describes your system in terms of a specific implementation technology and uses the frameworks that technology provides, such as EJB, a database model, or COM components.
  3. Finally, each PSM is translated into source code. Because a PSM is already tied entirely to a specific technology, this step is generally straightforward.

The hardest step in the MDA process is generating the PSM from the PIM. On one hand it requires rich, solid knowledge of the target technology; on the other, the source model (PIM) must contain enough information for the PSM to be generated automatically.

  • Generation by template: MDA-light?

In practical applications of MDA, an easier implementation works through templates (call it MDA-light). The platform-specific model step is effectively skipped, and source code is generated directly from the highly abstract PIM. Real programming then continues on top of MDA-light: the detailed application logic must be written in source code, not in UML.
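A toy version of such template-based generation might look like this in Java. The "PIM" here is reduced to a class name and a list of attribute names purely for illustration; a real tool would read a UML/XMI model rather than plain strings.

```java
import java.util.List;

// Hypothetical sketch of the "MDA-light" approach: source code is generated
// directly from a highly abstract model via a text template, skipping the
// intermediate platform-specific model.
public class MdaLightGenerator {
    static String generate(String className, List<String> attributes) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        for (String attr : attributes) {
            // One private field plus a getter per modelled attribute.
            src.append("    private String ").append(attr).append(";\n");
            src.append("    public String get")
               .append(Character.toUpperCase(attr.charAt(0))).append(attr.substring(1))
               .append("() { return ").append(attr).append("; }\n");
        }
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        System.out.println(generate("Customer", List.of("name", "address")));
    }
}
```

The detailed application logic would then be added by hand to the generated skeleton, which is exactly the trade-off MDA-light makes.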

  • Prerequisites for using MDA

It is a widely accepted fact that only change is permanent. Technology is always being reinvented; this is especially evident in the middleware space, and database technologies, operating systems, and even programming languages change frequently too. These technologies clearly change faster than the basic concepts of an application domain.

If you work in a specific application area, projects in that area all share a certain similarity. If an entire application family or different projects belong to the same application domain, then MDA or the generation process will be especially suitable for you.

  • Advantages of MDA

Your investment in modeling will be more lasting and effective, outliving the technology you currently use to implement it. This better protects your investment.

You have technical flexibility.

You will no longer be at the mercy of the differing change cycles of technologies and applications: with the help of MDA, you can stay neutral and keep flexibility in both directions.

  • Disadvantages of MDA

MDA means more "assembly" than "development": when building the PIM for an application you have essentially no technical wiggle room. For many developers today this is still hard to imagine.

The creative element of software development diminishes to a degree. Developers often find it fascinating to argue about a new technology and to work on its cutting edge; under the MDA process, however, much of the work is building models, far removed from specific technologies but in line with the OMG's recommendations.

Potential immaturity. UML2.0 is still in its infancy. MDA tools have also been around for a relatively short time. There is also a lot of risk hidden here.

  • Problems to be solved in MDA process and generation development

Migration of data and applications: a problem often faced in business today is how to migrate large amounts of data and existing applications to a new, MDA-based system. A pure MDA process treats the data model and database table structure as technical details that should have no impact at the Platform Independent Model (PIM) layer. So is your MDA tool or generator also responsible for generating the database scripts?

Software maintenance: The preparation of different releases, patches or upgrades is an important part of maintaining a currently running program. How does MDA deal with these problems? Doing a fresh install every time?

Return on investment: with which environment and system should you start? Does applying MDA pay off from the second project, or only from the fifth?

Generators and related tools create a dependency on their producers, exactly the kind of vendor dependency we have tried so hard to avoid in the past.

Enterprise Application Integration (EAI): a high level of abstraction sounds good, but how do you obtain that abstraction for an application that is already running?

You can see that there are potentially many practical questions, all of which deserve serious answers. These questions are why we created openMDA: in many projects, some of them have already been answered experimentally, and you (and we) will all benefit from that.

  • MDA’s Software Development Cycle

The software development process in MDA is driven by the act of modeling the software system. The MDA software development cycle is as follows:

The MDA life cycle is not very different from the traditional one. The main difference lies in the artifacts the development process creates: the PIM (Platform Independent Model), the PSMs (Platform Specific Models), and the code. The PIM is a model at a high level of abstraction, independent of any implementation technology. A PIM is converted into one or more PSMs, each tailored to a specific implementation technology; an EJB PSM, for example, is a system model expressed in EJB structures. The final step of development transforms each PSM into code, which is closely tied to the implementation technology.

In traditional development, transformations from model to model, or from model to code, are done by hand; in MDA they are done automatically by tools, both from PIM to PSM and from PSM to code. The PIM, PSM, and code models serve as design artifacts in the software development life cycle, where traditional development uses documents and diagrams. Importantly, they represent different levels of abstraction and different views of the same system. The ability to transform a high-level PIM into PSMs raises the level of abstraction at which developers work: it lets them understand the overall architecture more clearly without being "polluted" by specific implementation technologies, and it reduces their workload on complex systems.

The emergence of MDA points the way toward improving software development efficiency and enhancing software portability, interoperability, maintainability, and documentation. The object-oriented community has predicted that MDA will be the most important methodology of the next two years. The main problem with modeling today is that for many businesses it is merely a paper exercise: the model and the code fall out of sync, because the code is modified constantly while the model is not updated, and the model thus loses its meaning. The key to bridging the gap between modeling and development is to make modeling an integral part of development. MDA is a framework for model-driven development; its vision is to define a new way of describing and creating systems, making UML useful beyond drawing pretty pictures. Many experts predict that MDA may lead us into another golden age of software development.

  • MDA framework

MDA separates the model of a software system into a platform-independent model (PIM) and a platform-specific model (PSM) and unifies them through transformation rules, in this way trying to escape the difficulties caused by changing requirements. The PIM is a high-level abstraction of the system that contains no information tied to implementation technology; the PSM is a model specific to a particular platform. In the MDA framework, a platform-independent modeling language is used to build the PIM; then, according to the mapping rules of the chosen platform and implementation language, the PIM is transformed into a PSM; finally, the application code and test framework are generated.

The "building materials" of the MDA framework are: high-level models; one or more standard, well-defined languages in which to write those models; definitions of how to transform a PIM into a PSM; a language in which those transformation definitions are written, executable by a transformation tool; a tool able to execute the transformation definitions; and a tool able to transform a PSM into code.

The figure above shows the MDA framework; its main elements are models, the PIM, the PSM, languages, transformations, transformation definitions, and transformation tools. MDA is an open, software-vendor-neutral architecture that broadly supports different application domains and technology platforms, and it can act as a lever between them. In the MDA development approach, the PIM models the requirements and the PSM models the system after specific technologies have been applied, which makes MDA a lever between requirements and technologies: each can change independently of the other without tightly coupling business logic to implementation technology, while MDA bridges the gap between them through transformations, protecting our investment. The MDA approach makes our systems flexible to implement, integrate, maintain, and test, and the portability, interoperability, and reusability of a system can be maintained over the long term, ready for future change.

  • Status of MDA

MDA is still developing and evolving. Although it arrives with great momentum, its problems are also visible. The biggest benefit of MDA is the lasting value of the business model, but the cost is added layers of abstraction, and the transitions between layers are not as smooth as expected: going from PIM to PSM, and from PSM to code, is far harder than generating machine code from a 3GL. On the modeling side, UML is exposing inherent defects; it needs more mechanisms to support precise modeling and analysis models. OCL currently provides some support for precise modeling, but that support is still far from the ideal of an executable model. Looking back at the history of MDA, the great success of UML laid a solid foundation for its emergence; yet on the long road from software technique to software engineering, MDA is only a small step forward. Even so, it has sent waves through the entire software industry and will profoundly influence future IT in areas such as model definition and development process.

The current situation in the MDA tool market is this: because standardization of the PIM-to-PSM transformation is not yet complete, large vendors such as IBM and Borland remain mostly cautious. They provide some MDA functions in their development tools but do not fully follow the MDA specification defined by the OMG. Even so, besides adding MDA functions to Rational, IBM has proposed EMF (Eclipse Modeling Framework), an innovative MDA code generation project within the open-source Eclipse project, showing its commitment to the technology. Borland has announced that it too is focusing on MDA and plans to add automatic MDA-based model generation to Together. Compared with the calm restraint of the big vendors, some small and medium-sized vendors are especially active: tools that follow the OMG standard, such as Interactive Objects' well-known ArcStyler, Compuware's OptimalJ, and the open-source AndroMDA, have already been used widely in projects and achieved remarkable results.

  • MDA-related standards

In order to realize the grand vision of MDA, OMG has developed a series of standards:

UML: MDA uses UML to describe its various models. UML was not born for MDA, but as the most popular modeling language today, occupying some 90% of the modeling-language field worldwide, it has become the de facto standard, so the OMG's choice of it as the technical foundation of MDA was natural and sensible. It is the foundation of MDA and its most powerful weapon.

MOF: The MOF (Meta Object Facility) is a higher level of abstraction than UML. Its purpose is to describe UML extensions, or other UML-like modeling languages that may appear in the future. MOF was not born for MDA either, but in it we can appreciate the foresight of the OMG's engineers.

XMI: XMI (XML Metadata Interchange) defines an XML-based exchange format for models through a standardized XML document format and DTDs (Document Type Definitions). It allows a model, as a final product, to be moved between different tools, which is vital if MDA is to avoid replacing one form of lock-in with another.

CWM: The CWM (Common Warehouse Metamodel) provides a means of data format transformation. CWM can be used at any model level to describe the mapping rules between two data models, for example transforming data entities from a relational database into XML format. Within the MOF framework, CWM makes a general data model transformation engine possible.

In the OMG's blueprint, the UML, MOF, XMI, and CWM standards respectively solve MDA's problems of model creation, model extension, model exchange, and model transformation. Through standardized definitions the OMG tries to widen MDA's range of application; at the same time, within this extensible modeling-language environment, IT vendors are free to implement their own modeling languages and their mappings from language to executable code, as long as they remain within the OMG's standardized framework.

Summary

This chapter introduced the SaaS development model. The discussion of the key technologies for realizing SaaS software gives us a purposeful understanding of the area. The production-line approach of the software factory originates in traditional manufacturing; whether assembly-line operation can be applied to the software industry still faces some problems, but truly industrializing software production is not impossible. Development also cannot do without system architecture, which is mainly a question of how software is layered; this chapter used both .NET and J2EE examples to illustrate software architecture. Product development is not only a technical road but also a matter of enterprise business decision-making. Each company can adopt the most effective and optimal R&D model for its actual situation; establishing and accumulating its own development system helps it reuse code and greatly reduces development costs.
