Hi @dkangrga, I'm the product manager for Polaris. Caveating with the usual roadmap statements - we haven't yet made Polaris generally available, so things are subject to change - I can explain the following differences between Polaris and Hypermodels.

Polaris uses a completely new underlying engine that is natively sparse. That means the amount of memory/workspace used by a line item does not depend on its dimensionality; rather, it depends on the number of populated (non-zero for numeric) cells. So if I create a 10 billion cell line item in Polaris that is all zeros, it will require zero bytes of workspace. Note, though, that every populated (non-zero for numeric) cell in Polaris requires more memory/workspace (about 24 bytes) than every cell in the Classic engine (about 8 bytes), so there is a trade-off. Workspace size for a Polaris model is driven by the number of populated cells (including primary, aggregate, and calculated cells), not by the dimensionality.

Currently, Hypermodels means the current Classic engine with a workspace size greater than 130GB. That can be necessary when the amount of data being modelled is large enough, whether or not the business problem is highly sparse. We do plan to support Polaris Hypermodels - Polaris workspaces larger than 130GB - as well as standard Polaris workspaces (up to 130GB), but as we are still early in an EA phase with Polaris, we will need to see more data to understand best practice for those scenarios.
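To make the trade-off concrete, here is a rough back-of-envelope sketch. The ~24 and ~8 byte figures are the approximate per-cell costs mentioned above; the 10 billion cell line item and the 1% population density are illustrative assumptions, not figures from any real model:

```python
# Rough workspace estimates for the two engines, per the approximate
# per-cell costs quoted above. Density below is an illustrative assumption.
POLARIS_BYTES_PER_POPULATED_CELL = 24  # Polaris charges only populated cells
CLASSIC_BYTES_PER_CELL = 8             # Classic charges every cell, zero or not

def polaris_workspace_bytes(populated_cells: int) -> int:
    """Polaris cost depends only on populated (non-zero) cells."""
    return populated_cells * POLARIS_BYTES_PER_POPULATED_CELL

def classic_workspace_bytes(total_cells: int) -> int:
    """Classic cost depends on total dimensionality, populated or not."""
    return total_cells * CLASSIC_BYTES_PER_CELL

total_cells = 10_000_000_000      # a 10 billion cell line item
populated = total_cells // 100    # assume only 1% of cells are populated

print(f"Classic: {classic_workspace_bytes(total_cells) / 1e9:.1f} GB")
print(f"Polaris: {polaris_workspace_bytes(populated) / 1e9:.1f} GB")
```

At these assumed figures the break-even point is a population density of 8/24, i.e. about one third: below roughly 33% populated cells, Polaris uses less workspace; above it, Classic does.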