Frank-Wolfe Algorithm for Nonmonotone One-sided Smooth Function Maximization Problem

EasyChair Preprint no. 9245, version 2, 9 pages
Date: November 4, 2022

Abstract

In this paper, we study the problem of maximizing a nonmonotone one-sided $\eta$-smooth ($OSS$ for short) function $\psi(x)$ under a downwards-closed convex polytope constraint. The concept of $OSS$ was first proposed by Mehrdad et al. to capture properties of the multilinear extension of certain set functions; it is a generalization of the continuous $DR$-submodular function. The $OSS$ property guarantees an alternative bound based on a Taylor expansion. If the objective function is nonmonotone diminishing-return ($DR$) submodular, Bian et al. gave a $1/e$-approximation algorithm with a regret bound of $O(\frac{LD^{2}}{2K})$. On general convex sets, Dürr et al. gave a $\frac{1}{3\sqrt{3}}$-approximate solution with $O(\frac{LD^{2}}{(\ln K)^{2}})$ regret. In this paper, we consider maximizing the more general $OSS$ function: by adjusting the iterative step of the Jump-Start Frank-Wolfe algorithm, an approximation ratio of $1/e$ can still be obtained, at the cost of a larger regret bound $O(\frac{L(\mu D)^{2}}{2K})$ (where $L, \mu, D$ are parameters; see Table \ref{tab:1}). The larger the parameter $\eta$ we choose, the larger the regret, since $\mu=\left(\frac{\beta}{\beta+1}\right)^{2\eta}$ with $\beta\in (0,1]$.

Keyphrases: approximation algorithm, Frank-Wolfe, nonmonotone, one-sided smooth
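To make the iterative scheme concrete, here is a minimal sketch of a Frank-Wolfe-style continuous greedy loop with the $1/K$ step size the abstract alludes to. This is not the paper's Jump-Start variant: the objective `psi`, the box polytope $[0,1]^n$, and the closed-form linear maximization oracle are all illustrative assumptions chosen so the example is self-contained.

```python
import numpy as np

def continuous_greedy_fw(grad, lmo, n, K):
    # Frank-Wolfe-style continuous greedy sketch: starting from 0, each of
    # the K iterations asks a linear maximization oracle (lmo) for the
    # feasible direction best aligned with the current gradient, then
    # advances a 1/K step along it.
    x = np.zeros(n)
    for _ in range(K):
        v = lmo(grad(x))  # vertex of the polytope maximizing <grad(x), v>
        x = x + v / K
    return x

# Toy instance (an assumption, not from the paper): maximize
# psi(x) = sum(x) - 0.5 * ||x||^2 over the downwards-closed box [0,1]^n,
# whose maximizer is x = 1 with value n/2.
n, K = 5, 200
grad = lambda x: 1.0 - x                   # gradient of psi
lmo = lambda g: (g >= 0).astype(float)     # best vertex of [0,1]^n for <g, v>
x = continuous_greedy_fw(grad, lmo, n, K)
psi_val = x.sum() - 0.5 * (x ** 2).sum()
```

On this concave toy objective the iterate reaches the optimum exactly; for a general nonmonotone $OSS$ function the paper's analysis instead yields the $1/e$ approximation ratio with the stated regret bound.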
