GLORIA

GEOMAR Library Ocean Research Information Access


Filter
  • Publisher: Association for Computing Machinery (ACM)  (3)
  • 1
    Online Resource
    Association for Computing Machinery (ACM) ; 2012
    In: ACM Transactions on Knowledge Discovery from Data, Association for Computing Machinery (ACM), Vol. 5, No. 4 (2012-02), p. 1-31
    Abstract: We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multitask learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is nonconvex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose employing the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is nondifferentiable and the feasible domain is nontrivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multitask learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multitask learning formulation and the efficiency of the proposed projected gradient algorithms.
    Type of Medium: Online Resource
    ISSN: 1556-4681 , 1556-472X
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2012
    ZDB ID: 2257358-6
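The abstract above centers on a projected gradient scheme whose key step reduces to a Euclidean projection subproblem. As a simplified, hedged sketch (not the paper's actual formulation, which couples a cardinality regularizer with a low-rank constraint), the snippet below applies projected gradient descent to a least-squares objective over an ℓ1-ball, using the standard sort-based Euclidean projection; all function names are this sketch's own:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    # Euclidean projection of v onto the l1-ball {x : ||x||_1 <= radius}
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, b, radius=1.0, step=None, iters=200):
    # minimize 0.5 * ||A x - b||^2  subject to  ||x||_1 <= radius
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of the smooth loss
        x = project_l1_ball(x - step * grad, radius)
    return x
```

The projection is the O(n log n) sort-and-threshold method; the paper's own projection subproblem involves its composite sparse-plus-low-rank domain rather than a plain ℓ1-ball.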
  • 2
    Online Resource
    Association for Computing Machinery (ACM) ; 2012
    In: ACM SIGKDD Explorations Newsletter, Association for Computing Machinery (ACM), Vol. 14, No. 1 (2012-12-10), p. 4-15
    Abstract: Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research effort in the past decade due to their sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data.
    Type of Medium: Online Resource
    ISSN: 1931-0145 , 1931-0153
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2012
    ZDB ID: 2082223-6
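As a hedged illustration of the ℓ1-norm sparse methods surveyed in the abstract above, the sketch below solves the Lasso by iterative soft-thresholding (ISTA), the simplest member of that family; it is a generic textbook sketch, not code from the paper, and the names are hypothetical:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1, applied elementwise
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, b, lam, iters=500):
    # minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 via iterative shrinkage
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The soft-thresholding step is what makes the ℓ1 penalty sparsity-inducing: coordinates whose gradient-updated magnitude falls below lam/L are set exactly to zero.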
  • 3
    Online Resource
    Association for Computing Machinery (ACM) ; 2016
    In: ACM Transactions on Knowledge Discovery from Data, Association for Computing Machinery (ACM), Vol. 10, No. 3 (2016-02-24), p. 1-24
    Abstract: Linear regression is a widely used tool in data mining and machine learning. In many applications, fitting a regression model with only linear effects may not be sufficient for predictive or explanatory purposes. One strategy that has recently received increasing attention in statistics is to include feature interactions to capture the nonlinearity in the regression model. Such models have been applied successfully in many biomedical applications. One major challenge in their use is that the data dimensionality is significantly higher than that of the original data, resulting in the small-sample-size, large-dimension problem. Recently, the weak hierarchical Lasso, a sparse interaction regression model, was proposed that produces a sparse, hierarchically structured estimator by exploiting the Lasso penalty and a set of hierarchical constraints. However, the hierarchical constraints make it a non-convex problem, and the existing method finds the solution to its convex relaxation, which needs additional conditions to guarantee the hierarchical structure. In this article, we propose to directly solve the non-convex weak hierarchical Lasso using the General Iterative Shrinkage and Thresholding (GIST) optimization framework, which has been shown to be efficient for solving non-convex sparse formulations. The key step in GIST is to compute a sequence of proximal operators. One of our key technical contributions is to show that the proximal operator associated with the non-convex weak hierarchical Lasso admits a closed-form solution. However, a naive approach to solving each subproblem of the proximal operator leads to quadratic time complexity, which is not desirable for large-scale problems. To this end, we further develop an efficient algorithm for computing the subproblems with linearithmic time complexity. In addition, we extend the technique to perform optimization-based hierarchical testing of pairwise interactions for binary classification problems, which is essentially the proximal operator associated with the weak hierarchical Lasso. We have conducted extensive experiments on both synthetic and real datasets. Results show that our proposed algorithm is much more efficient and effective than its convex relaxation. Simulation studies show that the non-convex hierarchical testing framework outperforms the convex relaxation when a hierarchical structure exists between main effects and interactions.
    Type of Medium: Online Resource
    ISSN: 1556-4681 , 1556-472X
    Language: English
    Publisher: Association for Computing Machinery (ACM)
    Publication Date: 2016
    ZDB ID: 2257358-6
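The GIST framework described in the abstract above alternates gradient steps with a proximal operator for a non-convex penalty. The weak hierarchical Lasso's closed-form proximal operator is the paper's own technical contribution and is not reproduced here; as a simplified stand-in, this sketch uses the capped-ℓ1 penalty, a common non-convex penalty whose proximal operator also has a closed form, and replaces GIST's non-monotone line search with a fixed step. All names are hypothetical:

```python
import numpy as np

def prox_capped_l1(v, lam, theta):
    # closed-form proximal operator of lam * min(|x|, theta), elementwise:
    # compare the best point with |x| >= theta against the best with |x| <= theta
    x1 = np.sign(v) * np.maximum(np.abs(v), theta)
    x2 = np.sign(v) * np.minimum(theta, np.maximum(np.abs(v) - lam, 0.0))
    f1 = 0.5 * (x1 - v) ** 2 + lam * np.minimum(np.abs(x1), theta)
    f2 = 0.5 * (x2 - v) ** 2 + lam * np.minimum(np.abs(x2), theta)
    return np.where(f1 <= f2, x1, x2)

def gist(A, b, lam, theta, iters=300):
    # simplified GIST-style proximal gradient with a fixed step (no line search)
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = prox_capped_l1(x - grad / L, lam / L, theta)
    return x
```

The same skeleton would carry over to the weak hierarchical Lasso by swapping in its closed-form proximal operator, which is where the article's quadratic-vs-linearithmic subproblem analysis applies.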