MongoDB in conjunction with JPA

 
security forum advocate
Posts: 236
For a project I am working on, we are required to replicate data from DB2 to MongoDB. The object returned from DB2 is about 70 MB (an eagerly loaded object) containing multiple CLOBs and BLOBs. I am looking into GridFS to chunk it before saving it to MongoDB. Are there any suggested ways of doing this?
 
author
Posts: 17
I suppose it depends on how you want to use the object once it's in MongoDB. You can certainly just shove everything into a GridFS "file", but you lose queryability (a GridFS file should be treated like a BLOB would be in an RDBMS).

If the object is around 70 MB and you don't want to decompose it structurally, you will need to store it in GridFS, since MongoDB documents are limited to 16 MB each.
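To put numbers on that limit, here is a minimal sketch (class and method names are my own; the 16 MB BSON document limit and the 255 KiB default GridFS chunk size are MongoDB's documented values, and the 70 MB object size comes from the question):

```java
// Back-of-the-envelope math showing why a ~70 MB object cannot be a single
// MongoDB document and how many GridFS chunks it would occupy.
public class GridFsChunkMath {

    // MongoDB's hard per-document BSON size limit (16 MiB).
    static final long MAX_BSON_BYTES = 16L * 1024 * 1024;

    // GridFS default chunk size (255 KiB).
    static final long DEFAULT_CHUNK_BYTES = 255L * 1024;

    // Number of GridFS chunks needed for an object of the given size
    // (ceiling division: a partial final chunk still counts).
    static long chunkCount(long objectBytes, long chunkBytes) {
        return (objectBytes + chunkBytes - 1) / chunkBytes;
    }

    public static void main(String[] args) {
        long objectBytes = 70L * 1024 * 1024; // ~70 MB DB2 object from the post
        System.out.println("Fits in one document? " + (objectBytes <= MAX_BSON_BYTES)); // false
        System.out.println("GridFS chunks needed: " + chunkCount(objectBytes, DEFAULT_CHUNK_BYTES)); // 282
    }
}
```

In practice the upload itself would go through the Java driver's GridFS API (e.g. `GridFSBuckets.create(database)` and `GridFSBucket.uploadFromStream(...)`), streaming the DB2 BLOB/CLOB content rather than materializing the whole 70 MB object in memory.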

Hope that helps!
 