spark + apache zeppelin + web send-task-show-result 1b
$250-750 USD
Closed
Posted over 7 years ago
Paid on delivery
Zeppelin-based code to take two types of command from an HTML page to a Spark cluster:
1. [login to view URL] file that is already on the Spark cluster, and keep it in Redis with an ID in the Zeppelin web session (the front end will send a path that is valid on the back end)
2. Get the DataFrame from (1) and apply a transform to it, based on some Spark Scala code already written
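As a rough sketch of command (1) under stated assumptions: the Jedis Redis client is on the classpath, the input is CSV, and all names here (`LoadAndRegister`, the Redis host/port) are illustrative, not part of the brief.

```scala
// Illustrative sketch only: load a file already visible to the Spark
// driver into a DataFrame and record it in Redis under a generated ID.
// In a Zeppelin %spark paragraph the SparkSession `spark` is provided;
// here it is passed in explicitly to keep the sketch self-contained.
import java.util.UUID
import org.apache.spark.sql.SparkSession
import redis.clients.jedis.Jedis

object LoadAndRegister {
  def run(spark: SparkSession, path: String): String = {
    val df = spark.read.option("header", "true").csv(path) // input format is an assumption
    val id = UUID.randomUUID().toString
    df.createOrReplaceTempView(s"df_$id") // keep the DataFrame addressable from later paragraphs
    val jedis = new Jedis("localhost", 6379)
    try jedis.set(id, path)               // Redis maps the session ID back to the source path
    finally jedis.close()
    id                                    // the ID is what the front end holds onto
  }
}
```

Registering a temp view keyed by the ID lets command (2) look the DataFrame up again in the same Zeppelin session without re-reading the file.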
To-do and deliverables:
1. Understand the current Scala code.
2. Integrate it with Zeppelin.
3. We have skeleton HTML without any JS. You need to use jQuery (latest) or any version of Angular to talk to the Zeppelin WebSocket endpoint: send the two commands, get the reply, and show the current data on the page. For now it will be a small 200-record file; paging is not needed. We will scale later.
We need a Zeppelin expert who can use our GitLab repository to get the current code, then commit and push their code back to us.
You must start Monday and finish by Tuesday evening, so we are looking for someone who is available now and has already worked with Scala, Zeppelin, and Spark. It's dev only, so no real cluster is needed, but your local dev machine must have Redis, Zeppelin, and Spark (the latest versions that work with the latest Zeppelin and Scala 2.11).
[login to view URL] You do not have to write the code for the actual Spark data manipulation; that is already there and works fine. We need help with: 1. web page to Apache Zeppelin; 2. Apache Zeppelin to the Redis cache (Java controller); 3. Apache Zeppelin to the existing Spark code (open a file, show 100 lines in HTML, do one transform on it, send the results back, and update the web page).
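The Zeppelin-to-Spark leg (step 3) might look roughly like the following in a notebook paragraph. This is a sketch under assumptions: the DataFrame was earlier registered as a temp view named after its session ID, and the `amount` column filter is a hypothetical stand-in for the existing transformation code the brief says is already written.

```scala
// Illustrative only: fetch the DataFrame registered under `id`, take the
// first 100 rows for the page preview, apply one transform, and return
// rows the web layer can render.
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

def previewAndTransform(spark: SparkSession, id: String): (Array[Row], Array[Row]) = {
  val df: DataFrame = spark.table(s"df_$id")    // temp view created when the file was loaded
  val preview = df.limit(100).collect()         // "open file, show 100 lines on html"
  val transformed = df.filter(df("amount") > 0) // placeholder; the real transform already exists
  (preview, transformed.limit(100).collect())
}
```

With a 200-record file, `collect()` on 100 rows is safe; if the data scales later, the preview and result sets would need server-side paging instead.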
Hi, is this still available?
If there is still time, we could look at building your solution ASAP.
I have my own instance of a Zeppelin notebook and a Spark cluster running on my machine (and on Digital Ocean).
I have used both for general machine-learning tasks using Scala and Cassandra.
I have two years of Scala experience, and more specifically DevOps in the Scala/JS environment, especially using virtualisation such as Docker and its management with Kubernetes.
I have yet to use DataFrames, but I have used RDDs and GraphX graphs to do predictive analytics on a few fairly large data sets.
Hi, at Digiroo we have a team of dedicated Big Data architects, Big Data engineers, and developers. We recently implemented Big Data solutions, including Spark and Zeppelin, at NFU Mutual, a leading financial services company in the UK. I can assign a Big Data engineer and a Spark/Scala developer to your project and manage the account personally to ensure requirements are captured and documented to your standards and deliverables meet expectations.
Contact me for more information about how we can help.