
IBM Cognos Dynamic Cubes FAQ

Q1. What is IBM Cognos Dynamic Cubes?
Cognos Dynamic Cubes is an extension of IBM Cognos Dynamic Query that leverages substantial in-memory data assets as well as aggregate awareness to achieve high-performance interactive analysis and reporting over terabytes of warehouse data. IBM Cognos Dynamic Cubes requires a data warehouse structured in a star or snowflake schema in order to maximize the performance characteristics of the solution.

Q2. How do I create aggregates?
You create aggregates using the Aggregate Advisor, which is launched from Dynamic Query Analyzer. The Advisor can review a cube and recommend either in-database or in-memory aggregates that it determines will help performance. The Advisor can also review a workload, such as reports, packages, busy times, or specific users, for which to optimize aggregates. This allows you to be very selective about how you use memory to accelerate performance, focusing on the most important aspects of your application. In-memory aggregates can then be enabled by simply saving the recommendations to the content store and restarting your cube. No re-authoring or re-modeling is required for in-memory aggregates, making it easy to address performance challenges. Once in-database aggregates are created and modeled into the cube definition, queries from the reporting layer will automatically route to these aggregate tables, making a significant difference to performance (a simplified sketch of this routing idea follows Q3 below).

Q3. Is this a replacement for IBM Cognos PowerCubes?
No. Cognos Dynamic Cubes addresses a different problem than Cognos PowerCubes. Whereas Cognos Dynamic Cubes focuses on large-volume data warehouses, Cognos PowerCubes are ideal for analyzing data from an operational/transactional system, which typically poses performance challenges and whose volumes are typically lower than those of data warehouses. Creating a Cognos PowerCube from these systems moves and transforms the data, structuring it in a way that is optimal for reporting and analysis. However, any cube or MOLAP technology has inherent architectural limitations. To scale properly when data volumes expand into the terabytes, a properly structured data warehouse is the industry standard for enabling analytics. Flexible in-memory acceleration provides a scalable architecture that enables high-performance interactive analysis directly on the data warehouse.
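To illustrate the in-database aggregate routing described in Q2, here is a minimal Python sketch of the covering idea. The table names, columns, and the route_query helper are all hypothetical illustrations, not part of any Cognos API: the point is that an aggregate table can answer a query when the query's grouping columns are a subset of the columns the aggregate was built on.

```python
# Hypothetical sketch of aggregate-aware query routing (not Cognos
# internals): a query is answered from the smallest aggregate table
# whose grouping columns cover the columns the query asks for.

# Each aggregate table is described by the set of dimension columns it
# was summarized by; the fact table is the fallback that covers everything.
AGGREGATE_TABLES = {
    "agg_sales_by_month_region": {"month", "region"},
    "agg_sales_by_year": {"year"},
}
FACT_TABLE = "sales_fact"

def route_query(requested_columns: set[str]) -> str:
    """Return the name of the table that should answer the query.

    An aggregate qualifies when the query's grouping columns are a subset
    of the columns the aggregate was built on; among qualifying tables,
    the one with the fewest columns (the most summarized) wins.
    """
    candidates = [
        name for name, cols in AGGREGATE_TABLES.items()
        if requested_columns <= cols
    ]
    if candidates:
        return min(candidates, key=lambda name: len(AGGREGATE_TABLES[name]))
    return FACT_TABLE  # no aggregate covers the query; use detail rows

print(route_query({"region"}))           # -> agg_sales_by_month_region
print(route_query({"year"}))             # -> agg_sales_by_year
print(route_query({"customer", "day"}))  # -> sales_fact
```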

Q4. Why did we add this capability?
The primary reason we added this capability is to enable high-speed interactive analysis and reporting over terabytes of data. Dimensionally Modeled Relational (DMR) / OLAP Over Relational (OOR) solutions work well over low to medium data volumes, but when the number of fact table rows grows beyond approximately 20-25 million, performance starts to suffer and end users become dissatisfied with the analysis experience. Because data volumes are exploding, and IT organizations increasingly want to enable self-service BI rather than create a separate report to answer each business question, this type of flexible in-memory acceleration is critical.

Q5. Is this a brand new query engine?
No. This is an extension of the existing Dynamic Query Layer, leveraging the modern 64-bit query architecture as well as the SQL optimization that ensures we send the right SQL to the right data warehouse technology.

Q6. What data volumes can this technology deal with?
Cognos Dynamic Cubes is designed to provide high performance over data warehouses in the terabytes. In the labs, tests are being executed with data volumes in the terabytes, leveraging servers with Java heap sizes in the hundreds of gigabytes.

Q7. Does it create physical cubes such as Cognos PowerCubes or IBM Cognos TM1?
No. Cognos Dynamic Cubes are cube definitions that can include in-memory aggregates and query routing to database aggregate tables, but each cube maintains an active connection to the underlying database; this is sometimes referred to as an in-memory ROLAP engine. One of the benefits of this architecture is that you can drill down to details within the same report or area of analysis, rather than being forced to create a drill-through to a detail report from a MOLAP cube.
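To make the in-memory ROLAP pattern from Q7 concrete, here is a minimal Python sketch, with hypothetical class and table names and SQLite standing in for the warehouse connection: summaries are served from an in-memory cache, while detail drill-down goes straight to the live connection, so no separate MOLAP drill-through report is needed.

```python
# Hypothetical sketch of an in-memory ROLAP engine (not Cognos internals):
# summaries come from an in-memory cache of aggregated values, while
# detail rows are fetched over a live connection to the same database.

import sqlite3  # stands in for the warehouse connection

class InMemoryRolapCube:
    def __init__(self, connection: sqlite3.Connection):
        self.conn = connection          # active connection to the warehouse
        self.summary_cache: dict[tuple, float] = {}  # in-memory aggregates

    def summary(self, year: int, region: str) -> float:
        """Serve a summary from memory, computing and caching it on a miss."""
        key = (year, region)
        if key not in self.summary_cache:
            row = self.conn.execute(
                "SELECT SUM(amount) FROM sales WHERE year=? AND region=?",
                key,
            ).fetchone()
            self.summary_cache[key] = row[0] or 0.0
        return self.summary_cache[key]

    def details(self, year: int, region: str) -> list[tuple]:
        """Drill down to detail rows straight from the live connection."""
        return self.conn.execute(
            "SELECT * FROM sales WHERE year=? AND region=?",
            (year, region),
        ).fetchall()

# Usage with an illustrative in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (year INT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?,?,?)",
                 [(2012, "East", 100.0), (2012, "East", 250.0)])
cube = InMemoryRolapCube(conn)
print(cube.summary(2012, "East"))  # 350.0, cached for subsequent requests
print(cube.details(2012, "East"))  # detail rows from the live connection
```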

Q8. How should Cognos Dynamic Cubes be considered relative to other IBM Cognos OLAP technologies?
Different data requirements require different data solutions. One data path cannot be proficient at solving widely different data problems. Because of this, IBM Cognos offers technologies that are built to suit specific application requirements. The following table is intended to help you better understand the primary use case for each technology and position them accordingly. However, your individual application requirements must be carefully taken into consideration when making a decision.

Cube Technology / Primary Use Cases

IBM Cognos TM1 (in-memory MOLAP technology with write-back support)
- Optimal for write-back, what-if analysis, planning and budgeting, or other specialized applications.
- Able to handle medium data volumes.
- Aggregation occurs on the fly, which can impact performance with high data and high user volumes.

IBM Cognos Dynamic Cubes (in-memory accelerator for dimensional analysis)
- Optimal for read-only reporting and analytics over large data volumes.
- Aggregate-aware, with extensive in-memory caching for performance.
- Star or snowflake schema required in the underlying database (highly recommended to maximize performance).

IBM Cognos PowerCubes (file-based MOLAP cube with pre-aggregation)
- Optimal for providing a consistent interactive analysis experience to a large number of users when the data source is an operational/transactional system and a star or snowflake data structure cannot be achieved.
- The cube (MOLAP) architecture, which includes pre-aggregation, means that scalability requires careful management using cube groups.
- Data latency is inherent in any MOLAP cube technology, where data movement into the cube is required.

OLAP Over Relational (OOR) (dimensional view of a relational database)
- Optimal for easily creating a dimensional data exploration experience over low data volumes in an operational/transactional system, and where latency needs to be carefully managed.
- Caching on the Cognos Dynamic Query server helps performance.
- Processing associated with operational/transactional systems impacts performance.

Q9. What databases are supported?
In its first release (Cognos BI V10.2), Cognos Dynamic Cubes will support the following relational data sources:
- IBM DB2
- IBM Netezza
- Microsoft SQL Server
- Oracle
- Teradata

Q10. What are the ideal circumstances where Cognos Dynamic Cubes is the right solution?
Organizations with large data volumes that have enterprise data warehouses with star or snowflake schemas are ideal. Organizations that report directly from operational or transactional systems will not be able to use Cognos Dynamic Cubes, because a star or snowflake schema is a requirement. For those who do not want to invest in an enterprise data warehouse, using our other OLAP technologies will be required:
- Cognos PowerCubes or OLAP Over Relational for read-only requirements
- IBM Cognos TM1 for write-back / what-if / high-volatility requirements

Q11. Is there an extra charge for Cognos Dynamic Cubes?
There are no additional license roles that need to be purchased in order to use Dynamic Cubes. Existing roles such as "Administrator" and "Modeler" apply as usual, as Cognos Dynamic Cubes is part of the BI query layer. For organizations on a PVU pricing model, be aware that there might be an impact on the number of cores being used in the application, as this is a memory-intensive technology and a larger server may be needed to support a growing application.

Q12. What types of response times can I expect?
When you leverage aggregates, whether in-memory or in-database, your response time will improve dramatically compared to computing summaries on the fly. In the labs, our testing using only in-memory aggregates gave us extremely fast performance: over 80% of queries returned in under 3 seconds, with the majority sub-second. This testing was done with database volumes in the terabytes and did not include the data cache being warmed by regular system usage, as would normally be the case; it relied strictly on aggregates. This type of performance makes a dramatic difference when doing interactive data analysis. When users drill down to follow a path based on discovering an outlier in a summary report, each level requires its own set of summaries to be returned. Having these summaries precomputed, in-memory or in-database, gives users an instantaneous response rather than computing the summary on demand. Note, however, that detail reports, which are typically accomplished with relational queries, may not be helped by the in-memory aggregates in Cognos Dynamic Cubes, though the data cache may help if it has been loaded in-memory by other users or through scheduled cache-priming jobs.
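The drill-down point in Q12 can be shown in miniature with a hedged Python sketch. The rows, level names, and numbers below are purely illustrative: each level of a drill path (year, then quarter) needs its own set of summaries, and when those are precomputed, answering each step is a constant-time lookup instead of a fresh scan of the detail rows.

```python
# Illustrative only: why precomputed summaries speed up drill-down.
from collections import defaultdict

detail_rows = [
    ("2012", "Q1", "Jan", 120.0), ("2012", "Q1", "Feb", 80.0),
    ("2012", "Q2", "Apr", 200.0), ("2012", "Q2", "May", 150.0),
]

# Precompute one summary table per drill level, as an aggregate would.
by_year, by_quarter = defaultdict(float), defaultdict(float)
for year, quarter, month, amount in detail_rows:
    by_year[year] += amount
    by_quarter[(year, quarter)] += amount

# The drill path year -> quarter is now two constant-time lookups
# rather than two scans over the detail rows.
print(by_year["2012"])             # 550.0
print(by_quarter[("2012", "Q2")])  # 350.0
```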

Q13. What user interfaces are supported when using Cognos Dynamic Cubes?
Because Cognos Dynamic Cubes is part of the Cognos BI Dynamic Query Layer, the data can be surfaced through any of the regular Cognos BI interfaces, such as Report Studio, Cognos Workspace, and Cognos Workspace Advanced.

Q14. How do I model a Cognos Dynamic Cube?
Cognos Dynamic Cubes are modeled with a built-for-purpose modeling tool called IBM Cognos Cube Designer. It connects directly to the data warehouse and leverages the existing relationships between fact tables and their dimension tables to accelerate the creation of cubes. It follows modern design principles to create an intuitive modeling experience. In beta testing, modelers who were used to Transformer found it an easy and positive transition.

Q15. Does Cognos Dynamic Cubes have any relationship with Cubing Services?
Yes. In fact, Cognos Dynamic Cubes is a generational evolution of the Cubing Services technology.
